Tag Archives: SAT

False Positives

I quit writing about tests. And test prep. Five, six years ago? I still taught test prep until this year, always giving in to my old employer’s pleas to teach his Saturday classes. But I largely quit the SAT after the last changes, focusing on the ACT. I still love tests, still enjoy coaching kids for the big day.

Explaining why has been a task I’ve avoided for several years, as the doubt is hard to put into words. 

It was an APUSH review course, the last one I taught, I think. Class hadn’t started for the day, but one of my five students was sitting there highlighting notes. She was a tiny little thing, perky and eager but not intellectually remarkable, and it was March of what would have been her junior year.

“This is my last test prep course. I’ve taken the SAT for the fourth time, took AP Calculus BC last year, and I’m all done.”

“Yay! How’d you do on the SAT?”

“2400,” she said, casually. “I got 2000 the first time, but I spent the whole summer in two prep courses, plus over Christmas.”

Boom.

Like I said, she was…ordinary. Bright, sure. But her APUSH essays were predictable, regurgitating the key points she’d read in the prep material–pedestrian grammar, too many commas. Her Lexile level was unimpressive. Nothing terrible. I gave her some tips.

This girl had placed in the 99th percentile for the SAT but couldn’t write a grammatically complex sentence, much less an interesting one. Couldn’t come up with interesting ways to use data (graphs, statistics). Couldn’t accurately use the words she’d memorized, and didn’t understand their nuance when she met them in reading passages.

She was a false positive.

I’ve known a lot of high scoring students of every ethnicity over the years–and by high scoring, I mean 1400-1600 on the 1600 SAT, and 2200-2400 during the decade of the three-section, 2400-point SAT. 5s on all AP tests, 700+ on all Subject tests. Until that conversation, I would have said kids with high test scores were without exception tremendously impressive kids: usually creative, solid to great writing, opinionated, spotted patterns, knew history, knew the underlying theory of anything that interested them. I could see the difference, I’d say, between these kids and those slightly lower on the score scale–the 1200s, the kids who were well rounded with solid skills, who were sometimes as impressive, sometimes not, sometimes a swot, sometimes a bright kid who didn’t see much point in striving.

Every time I said it, though, I’d push back memories of a few kids who’d casually mentioned a 5 score, or a 1600 or 2400, that took me aback. That particular kid who didn’t seem all that remarkable for such a high score. But in all these cases, I was only relying on gut instinct, and besides, disappointing high-IQ folks exist. For every Stephen Hawking there’s a Ron Hoeflin. Or a Marilyn vos Savant, telling us whether or not larks are happy. Surely the test would sometimes capture intellect that just wasn’t there in the creative, original ways I looked for. Or hey, maybe some of those kids were stretching the truth.

But here, I had my own experience of her work and her scores were easily confirmable, as my employer kept track (her name was on the “2400 list”, the length of which was another shock to my prior understanding). She got a perfect score despite being a banal teen who couldn’t write or think in ways worthy of that score.

Since that first real awareness, I’ve met other kids with top 1% test scores who are similarly…unimpressive. 98+ percentile SAT scores, eight 5s on AP tests, and a 4.5 GPA with no intellectual depth, no ability to make connections, or even to use their knowledge to do anything but pick the correct letter on a multiple choice test or regurgitate the correct answer for a teacher. For some, I could confirm the high scores; for others I just trusted my gut, now that my instinct had been validated. These are kids with perfectly decent brains, but nothing unusual. No shame in that. But no originality, not even the kind I’d expect from their actual abilities. No interest in anything but achieving high scores, without any interest in what that meant.

It probably won’t come as a shock to learn that all the kids with scores much higher than demonstrated ability were born somewhere in east Asia, that they all spent months and months learning how to take the test, taking practice tests, endlessly prepping.

The inverse doesn’t hold. I know dozens, possibly hundreds, of exceptional Asian immigrants with extraordinary brains and the requisite intellectual depth and heft I would expect from their profile of perfect SAT scores and AP Honors status. But when I am shocked at a test score that is much higher than demonstrated ability, the owner of that score is Chinese or Korean of recent vintage. 

I don’t know whether American kids (of any race) could achieve similar scores if they swotted away endlessly. Maybe some of them do. But my sample, across all races, is pretty large, and I’ve not seen it. On the other hand, I’m certain that very few American kids would find this a worthwhile goal.

Brief aside: when I taught ELL, I had a kid who was supposedly 18. That’s what his birth certificate said, although there’s a lot of visa fraud among Chinese immigrants, so who knows. He didn’t look a day older than fourteen. And he had very little interest in speaking or learning English. Maybe he was just shy, like Taio, although I’d test him every so often by offering him chocolate or asking him about his beloved bike, and he showed no sign of comprehension. But then he’d ace multiple choice reading passages. Without reading the passage. He had no idea what the words meant, but he’d pick the right A, B, or C, every time. I mentioned this to the senior ELL teacher, a Chinese American, and she snorted, “It’s in our genes.”

I don’t think she was kidding, but the thing is, I don’t much care how it happens. If American kids are doing this, then it changes not a whit about my unhappiness. It’s not a skill I want to see transferred to the general teenage American population. (That said, the college admissions scandal makes it pretty clear that, as I’ve said many times, rich parents are buying or bribing their way in, not prepping. And unsurprisingly, it appears that Chinese parents were the biggest part of the ringleader’s business.)

Now, before everyone cites data that I probably know better than they do, let me dispatch with the obvious. Many people think test prep doesn’t work at all. That was never my opinion. When people asked me if test prep “worked,” I’d always say the same thing: it depends on the kid. “Average score improvement” is a useless metric; some kids don’t improve, some improve a bit, some improve a huge amount. Why not pay to see if your kid improves a lot? But I also felt strongly that test prep couldn’t distort measured ability beyond actual ability, and I no longer believe that.

But I didn’t believe what critics at the time said, that test prep worked…too well. I didn’t believe that false positives were a real problem. And the terrible thing–at least to me–is that I still believe normal test prep is a good thing. Distortion of ability, however, is not.

As the push to de-emphasize tests came, as test-entry high schools came under attack, as colleges turned to grades only–a change I find horrifying–I could no longer join the opposition, because the opposition focused its fire almost exclusively on its dismay at the end of meritocracy and the concomitant discrimination against Asian immigrants. I oppose the discrimination, but I no longer really believe the tests we have reliably reveal merit to a granular degree. The changes I want to see in the admissions process would almost certainly reduce Asian headcount, not by design, but by acknowledging that specific test scores aren’t as important.

I have other topics I’ve been holding off discussing:

  • why I support an end to test-based high schools in their current form
  • why we still need tests
  • how the SAT changes made all this worse
  • how the emphasis on grades for the past 20 years has exacerbated this insanity
  • why we need to stop using hard work as a proxy for merit

But I needed to try, at least,  to express how my feelings have changed. This is a start. It’s probably badly written, but as you all know, I’ve been trying to write more even if the thoughts aren’t fully baked, so bear with me.

 

 


College Confidential and Brain Dumping the SAT

SAT Scores Delayed for Asian International Students

The above is the official story put out by the Washington Post, which is far more informative than any other outlet I could find. However, Valerie Strauss put some other information in two blog entries:

On Oct. 8, 2014 — days before the Oct. 11 administration of the SAT — the National Center for Fair & Open Testing received an anonymous tip about cheating that included what the sender claimed to be a copy of the December 2013 SAT that was supposedly going to be administered at international sites Oct. 11. This was reported by Bob Schaeffer, public education director of the center, a nonprofit dedicated to ending the abuse of standardized tests commonly known as FairTest. He said FairTest tried to confirm the claims but could not.

According to Schaeffer, SAT tests given at international sites are “almost always” repeats of exams administered previously in the United States but not publicly released.

Students began to think that the October 2014 international version of the SAT was identical to the December 2013 U.S. version by Googling some vocabulary words and passage topics and finding that the 2013 test was the one that came up in discussion threads on “collegeconfidential.com,” according to Schaeffer. It is not yet clear, however, whether the two tests are identical.

I’ll have more to say about the media coverage, but I got distracted by reading up on College Confidential. I’ve always been skeeved by the forum, but that’s because I’m usually researching the test threads, which are almost certainly populated by Asians and Asian Americans. No doubt the forums have other purposes; I hear parents frequent them. Little has been written about the forum; the NY Times wrote a feature about it that seems out of date. Quantcast shows that Asians represent 13% of the users, considerably above average. 18-24 is the largest age group, 45-54 is second. So it’s clearly not just used for college tests.

Anyway, I read the College Confidential thread, which was opened back in early November for the December test, but from page 4 to page 70 is nothing but brain dumps. The posters make reference to Tiny Chat, a conferencing chat room, and Google Docs, where they are clearly compiling a list of all the answers. Many posters are putting down all the answers they can remember, in specific detail. One poster lists all the math answers by section (pages 57, 58, and 59):

[Screenshots of the posted math answer lists: ccmathsatanswers, ccmathsatanswers2, ccmathsatanswers3]

A few weeks later, a new thread is opened for the December international test, held on December 7th—and posted so early that the thread date was December 6th (the forum is on US time, I assume). In response to the creator’s query, another poster announces that the December international test was a reissue of the June 2012 test, and for good measure gives a table:

JAN 2013 – MARCH 2010
MAY 2013 – JUNE 2009
JUNE 2013 – MARCH 2012
OCTOBER 2013 – MARCH 2013
NOVEMBER 2013 – JUNE 2011
DECEMBER 2013 – JUNE 2012

One poster asked about the December 7 international test and was sent to the June 2012 thread, where again, all the answers are put down. One person (poster name largeblackman; I am deeply skeptical) posts the reading section answers.

These are the only two months I checked.

Someone reading this is going to say “I did this back when I took the SAT. Chewed over everything I remembered with my friends, worried if we didn’t get the same answers.” Well, no. You didn’t do this. Some of the posters are going into shocking detail. They have question numbers, letter answers. A good chunk of the posters were clearly coordinating the creation of a complete document with all the questions and answers.

They were braindumping, an activity that Microsoft spends a lot of time and energy preventing, but the College Board seems to actively encourage by reusing old tests for international students.

No wonder Asians have such a strong preference for the SAT. The credulous press tends to believe in the super tutors of Asia, but they’re much more likely to be New Oriental “prep” methods revisited. Steal the test, then memorize everything on it. GMAT had similar issues.

Valerie Strauss quotes the head of an international school who caught a cheater: “This is certainly organized crime.”

I suppose it’s possible that all these posts at College Confidential are just 17-year-olds pranking each other. I find that unlikely. More probably, the posters in question aren’t all 17, but adults who are paid to go in and take the tests while photographing or at least memorizing as much of the test as is possible. Or at the very least, the posters are actual high school students coordinating information illegally. Certainly, someone should at least investigate: ask the owners to provide the IP addresses, actually read the threads, ask the posters to produce the google docs they mention, find the actual names of people who participated.

But universities want the Chinese money, and College Board wants the test fees, and the FBI has to keep watch on Ferguson so that Holder can admonish the grand jury when Darren Wilson isn’t indicted. Who has the time or inclination to investigate a possible organized criminal enterprise that’s corrupting our educational institutions?


SAT Writing Tests–A Brief History

I have a bunch of different posts in the hopper right now, but after starting a mammoth comment on this brand new E. D. Hirsch post (Welcome to blogging, sir!), I decided to convert it to a post—after all, I need the content. (Well, it was brand new when I started this post, anyway.)

Hirsch is making a larger point about Samuel Messick’s concern with consequential validity versus construct validity, but he does so using the history of the SAT. In the 80s, says Hirsch, the ETS devised a multiple-choice-only method of testing writing ability, which was more accurate than an essay test. But writing quality declined, he implies, because students believed that writing wasn’t important. Thanks to Messick, though, the SAT finally included a writing sample in its 2005 changes.

I have nothing more than a layman’s understanding of construct vs. consequential validity, and Hirsch’s expertise in the many challenges of assessing writing ability is unquestioned, least of all by me. But I know a hell of a lot about the SAT, and what he writes here just didn’t match up with what I knew. I went looking to confirm my knowledge and fill any gaps.

First, a bit of actual SAT writing assessment history:

  • By 1950, the CEEB (College Board’s original name) had introduced the English Composition Achievement Test. The original test had six sections, three multiple choice, three essay (or free response). The CEEB began experimenting with a full 2-hour essay the next year, and discontinued that in 1956. At that point, I believe, the test was changed to 100 question multiple choice only. (Cite for most of this history; here’s a second cite but you need to use the magnifying glass option.)
  • In 1960, the CEEB offered an unscored writing sample to be taken at the testing center, at the universities’ request, which would be sent on to the schools for placement scoring. (I think this was part of the SAT, but can’t be sure. Anyone have a copy of “The Story Behind the First Writing Sample”, by Fred Godshalk?)
  • In 1963, the English Composition Achievement Test was changed to its most enduring form: a 20 minute essay, followed by a 40-minute multiple choice section with 70 questions.
  • In 1968, the CEEB discontinued the unscored writing sample, again at the universities’ request. No one wanted to grade the essays.
  • In 1971, the CEEB discontinued the essay in the ECAT, citing cost concerns.
  • In 1974, the SAT was shortened from 3 hours to 2 hours and 45 minutes, and the Test of Standard Written English was added. The TSWE was multiple choice only, with questions clearly similar to the English Composition Achievement Test. The score was not included in the SAT score, but was reported to colleges separately, to be used for placement.
  • In 1976, in response to complaints, the essay version of the ECAT was reinstated. (It may or may not be significant that four years later, the ETS ran its first deficit.) From what I can tell, the ECAT and the TSWE process remained largely unchanged from 1976 through 1994. This research paper shows that the essay was part of the test throughout the 80s.
  • In 1993, all achievement tests were rebranded as SAT II; the English Composition Achievement Test was renamed the SAT II Writing exam. At some point, the SAT II was shortened from 70 to 60 questions, but I can’t find out when.
  • In 1994, there were big changes to the SAT: an end to antonyms, calculators allowed, free response questions in math. While the College Board had originally intended to add a “free response” question to the verbal section (that is, an essay), pressure from the University of California, the SAT’s largest customer, forced it to back down (more on this later). At this time, the TSWE was discontinued. Reports often said that the SAT Writing exam was “new”; I can find no evidence that the transition from the ECAT to the SAT II was anything but seamless.
  • In 1997, the College Board added a writing section to the PSAT that was clearly derived from the TSWE.
  • In 2005, the College Board added a writing section to the SAT. The writing section has three parts: one 25 minute essay and two multiple choice sections for a total of 49 questions. The new writing test uses the same type of questions as the ECAT/SAT II, but the essay prompt is simpler (I can personally attest to this, as I was a Kaplan tutor through the transition).
  • By the way, the ACT never offered an essay until 2005, when compliance with UC’s new requirement forced it to add an optional one.

I’m sure only SAT geeks like me care about this, but either Hirsch is wrong or all my links are wrong or incomplete. First, even with his link, I can’t tell what he’s referring to when he says “ETS devised a test…”. A few sentences before, he places the date as the early 80s. The 80s were the one decade of the past five in which the College Board made no changes to any of its writing tests. So what test is he referring to?

I think Hirsch is referring to the TSWE, which he apparently believes was derived in the early 80s, that it was a unique test, and that the College Board replaced the TSWE with the required essay in 2005. This interpretation of his errors is the only way I can make sense of his explanation.

In that case, not only are his facts wrong, but this example doesn’t support his point. The SAT proper did not test written English for admissions. The TSWE was intended for placement, not admissions. Significantly, the ACT was starting to pick up market share during this time, and the ACT has always had an excellent writing test (multiple choice, no essay). Without the TSWE, the SAT lacked a key element the ACT offered, and saying “Hey, just have your students pay to take this extra test” gave the ACT an even bigger opening. This may just possibly have played into the rationale for the TSWE.

Colleges that wanted an SAT essay test for admissions (as opposed to placement) had won that battle with the English Composition Achievement Test. The CEEB bowed to the pressures of English teachers not in 2005, but in 1963, when it put the essay back into the ECAT despite research showing that essays were unreliable and expensive. After nine years of expense the CEEB believed to be unnecessary, it tried again to do away with the essay, but the same pressures forced it to use the essay on the English Composition Achievement Test/SAT II Writing Test from 1976 to 2005, when the test was technically discontinued, but actually shortened and incorporated into the SAT proper as the SAT Writing test. Any university that felt strongly about using writing for admissions could just require the ECAT. Many schools did, including the University of California, Harvard, Stanford, and most elite schools.

The College Board tried to put an essay into the test back in the 90s, but was stopped not because anyone was concerned about construct or consequential validity, but because its largest customer, the University of California, complained and said it would stop using the SAT if an essay was required. This struck me as odd at first, because, as I mentioned, the University of California has required that all applicants take the English Composition Achievement Test since the early 60s. However, I learned in the link that the Achievement Test scores weren’t used as an admissions metric until later in the 90s. In 1994, UC was using affirmative action, so it wasn’t worried about blacks and Hispanics. Asians, on the other hand, had reason to be worried about an essay test, since UC had already been caught discriminating against them, and UC clearly felt some placation was in order. Later, after the affirmative action ban, UC did a 180 on the essay, requiring that an essay be added to the SAT in 2005.

Why did the College Board want to put an essay in the SAT in 1994, and why did UC change its position 11 years later? My opinion: by then the College Board was getting more efficient at scoring essays, and the ECAT/SAT II Writing wasn’t catching on with anyone other than elite schools and UC. If the Writing test was rolled into the SAT, the College Board could charge more money. During the 90s we saw the first big push against multiple choice tests in favor of “performance-based assessments” (Hirsch has a whole chapter in one of his books about these misconceptions), giving the College Board a perfect rationale for introducing an essay and charging a lot more money. But UC nixed the essay until 2002, when its list of demands to the College Board called for removing analogies and quantitative comparisons and—suddenly—for rolling the writing assessment into the main SAT (page 15 of the UC link). I can see no reason for this—at that time, UC still required Subject tests, so why couldn’t applicants take the writing test when they took their other two Subject tests? The only reason—and I mean the only reason—I can see for rolling the writing test into the main SAT comes down to profit: the change made the College Board a hell of a lot of money.

Consider: the College Board already had the test, so no development costs beyond dumbing the test down for the entire SAT population (fewer questions, more time for the essay). So a test that only 10% of the testing population paid for could now be sold to 100% of the testing population. The 2005 SAT was both longer (in time) and shorter (in total questions), and a hell of a lot more expensive. Win win.

So UC’s demand gave the College Board cover. Fair’s fair, since UC had no research rationale whatsoever in demanding the end to analogies and quantitative comparisons, changes that would cost the College Board a great deal of money. Everyone knows that California’s ban on affirmative action has made UC very, very unhappy, and if I were to assert without foundation that UC hoped and believed that removing the harder elements of the SAT would reduce the achievement gap and enable the university to admit more blacks and Hispanics, well, I’d still get a lot of takers. (Another clue: UC nearly halved the math testing requirement at the same time—page 16 of the UC link.) (Oh, wait—still another clue: seven years later, after weighting the Subject tests more heavily than the SAT and threatening to end the SAT requirement altogether, UC ends its use of…the Subject tests. Too many Asians being “very good at figuring out the technical requirements of UC eligibility.”)

So why does any of this matter?

Well, first, I thought it’d be useful to get the history in one place. Who knows, maybe a reporter will use it some day. Hahahahaha. That’s me, laughing.

Then, Hirsch’s assertion that the “newly devised test”, that is, the TSWE, led to a great decline in student writing ability is confusing, since the TSWE began in 1974 and was discontinued twenty years later. So when did student writing ability decline? I’ve read before now that the seventies, not the eighties, saw writing nearly disappear from the high school curriculum (but certainly Hirsch knows about Applebee, way more than I do). If anything, writing instruction has improved, but capturing national writing ability is a challenge (again, not news to Hirsch). So where’s the evidence that student writing ability declined over the time of the TSWE, which would be 1974-1994? And where’s the evidence that writing ability has improved since the SAT achieved “consequential validity”?

Next, Hirsch’s history ignores the ECAT/SAT II Writing test, which offers excellent research opportunities for the impact of consequential validity. Given that UC has required a test with an essay for 50 years, Hirsch’s reasoning implies that California students, who faced an essay test, would have a stronger writing curriculum and abilities. Moreover, any state university that wanted to improve its students’ writing ability could just have required the ECAT/SAT Writing test—yet I believe UC was the only public university system in the country with that requirement. For that matter, several states require all students to take the ACT, but not the essay. Perhaps someone could research whether Illinois and Colorado (ACT required) have a weaker writing curriculum than California.

Another research opportunity might involve a comparison between the College Board’s choices and those driving American College Testing, creator of the ACT and the SAT’s only competition. I could find no evidence that the ACT was subjected to the on-again, off-again travails of the College Board’s English/Writing essay/no essay test. Not once did the College Board point to the ACT and say to all those teachers demanding an essay test, “Hey, these guys don’t have an essay, so why pick on us?” The ACT, from what I can see, never got pressured to offer an essay. This suggests, again, that the reason for all the angst over the years came not from dissatisfaction with the TSWE, but rather the Achievement/SAT II essay test, and the College Board’s varying profit motives over the years.

Finally, Hirsch’s example also assumes that the College Board, universities, high school teachers, and everyone else in 2005 were thinking about consequential or construct validity in adding the essay. I offer again my two unsupported assertions: The College Board made its 1994 and 2005 changes for business reasons. The UC opposed the change in 1994 and demanded it in 2005 for ideological reasons, to satisfy one of its various identity groups. Want to argue with me? No problem. Find me some evidence that UC was interested in anything other than broadening its admissions demographic profile in the face of an affirmative action ban, and any evidence that the College Board made the 2005 changes for any other reason than placating UC. Otherwise, the cynic’s view wins.

At some later date, I’ll write up my objections to the notion that the essay test has anything to do with writing ability, but they pulled the focus away, so I yanked them from this post.

By the way, I have never once met a teacher, except me, who gives a damn about helping his or her students prepare for the SAT. Where are these teachers? Can we take a survey?

Every so often, I wonder why I spend hours looking up data to refute a fairly minor point that no one really cares about in the first place and yes, this is one of those times. But dammit, I want things like this to matter. I don’t question Hirsch’s goals and agree with most of them. But I am bothered by the simplification or complete erasure of history in testing, and Hirsch, of all people, should value content knowledge.

Yeah, I did say “brief”, didn’t I? Sorry.


SAT Prep for the Ultra-Rich, And Everyone Else

Whenever I read about SAT tutors charging in the hundreds of dollars, I’m curious. I know they exist, but I also know that I’m pretty damn good, and I’m not charging three figures per hour (close, though!). So I always read them closely to see if, in fact, these test prep tutors are super fab in a way that I’m not.

At the heart of all test prep stories lies the reporter’s implicit rebuke: See what rich people are doing for their kids? See the disadvantage that the regular folks operate under? You can’t afford those rates! You’re stuck with Kaplan or cheaper, cut-rate tutors! And that’s if you’re white. Blacks and Hispanics can’t even get that much. Privilege. It sucks.

And so we get the emphasis on the cost of the tutors rather than any clear-eyed assessment of what, exactly, these tutors are doing that justifies an hourly rate usually reserved for low-end lawyers; never mind that these stories are always about the SAT, when in fact the ACT is taken by as many kids as the SAT. The stories serve up propaganda more than they provide an accurate picture of test prep.

I’ve written before about the persistence of test prep delusions. Reality, summarized: blacks and Hispanics use test prep more than whites, Asians use it more than anyone. Rich parents are better off buying their kids’ way into college than obsessing about the last few points. Test prep doesn’t artificially inflate ability.

So what, in fact, is the difference between Lisa Rattray, test prep coach charging $300/hour; me, charging just short of 3 figures; and a class at Kaplan/Princeton/other SAT test prep schools?

Nothing much. Test prep coaches can work for a company or on their own. The only difference is their own preferences for customer acquisition. Tutors and instructors with a low risk tolerance just sign on with a company. Independent operators, comfortable with generating their own business, then pick their markets based on their own tolerance. My customers sit comfortably in the high income bracket, say $500K to $5 million yearly income, although I’ve worked with a couple Fortune 500 families. Lisa Rattray and Joshua Brown, the featured tutors, clearly work with families a couple notches up the income ladder from mine.

None of this has anything to do with quality of instruction. Test prep is a sales and marketing game. The research is clear: most kids improve at least a little, quite a few kids improve a lot, a very few kids stay put or, heaven forfend, get worse.

Obviously, instructor quality influences results a bit, but it only rarely changes a kid from one category (mild improvement) to another (major improvement). Remember, all test prep instructors have high test scores, and they’re all excellent at understanding how the test works. So they make career decisions based on their tolerance for sales and marketing, not the quality of their services. I know of some amazingly god-awful tutors who charge more than I do, having learned of them from their furious ex-clients who assumed a relationship between price and quality. These tutors have websites and business cards, offer their own prepared test materials, see students in their rented space, and often accept credit card deposits. I have none of these accoutrements, show up at my clients’ houses, usually but not always on time, and take checks. Every so often I get a client who whips out a wad of bills and pays me $500 in cash, which I find a tad unnerving.

I’m just as good now as I was at Kaplan (in fact, I privately tutored my own students while at Kaplan, tutoring theirs), but Kaplan only paid me $24/hour while charging about $125/hour for my services. Kaplan would (at least, when I worked there) boost a teacher’s hourly rate to $50/hour if they got 80% or more “perfect” customer ratings. Instructors who convinced their students to respond to the online survey and give them excellent ratings got more money. This is independent of actual improvement. A customer who doesn’t improve at all but feels reassured and valued by her instructor could give straight 5s (or 1s, whatever the highest rating is). A customer who sees a 300 point improvement might not fill in the survey at all. Kaplan’s research showed that customers who gave their instructors perfect ratings generated awesome word of mouth, and that was worth rewarding. Nothing else was. Asian cram schools pay instructors based on the students who sign up, with a premium for those who sign up specifically for that instructor. See? Sales and marketing.

Test prep companies, long castigated as the luxury option of the wealthy, have been the first choice of the middle class for a decade or more. For the reasons I’ve outlined, any parent can find excellent instructors in all the test prep companies: Kaplan, Princeton Review, Asian cram schools. They won’t brag about it, though, because these companies are about the brand. Kaplan doesn’t want word getting out that Joe Dokes is a great Kaplan instructor; it wants everyone to be happy with Kaplan. No one is “Princeton Review’s star tutor” for very long, because Princeton doesn’t like it, and at that point even the most risk-averse instructor probably has enough word-of-mouth fame to go independent.

I’ve often advised my students to consider a class. The structure helps. Some of my kids don’t do any work unless I’m there, so what I end up doing is sitting there playing Spider on my android on my client’s dime while the kid works problems, rather than reviewing a bunch of work to move forward. I’m pretty sure Lisa and Joshua would celebrate this, going to the parent and pointing out how much they are helping. I have better things to do and other clients to see. So I tell the parents to fork out an extra thousand for a class, make sure the kid goes, and then we review the completed work. The student gets more hours, more focus and, usually, higher scores, regardless of the quality of the second instructor.

I’m not saying Lisa and Joshua are wrong, mercenary, or irresponsible. They just play to a different clientele, and a huge chunk of their ability to do so rests on their desire to sell an image. That’s fine. That’s just not me. Besides, Josh forks out $15K of his profit for a rental each summer. Lisa gets constant text messages from anxious parents. Also not me.

So you’re a white, middle class or higher parent with a teenager, worried about SAT scores. What do you do? Here are some guidelines. Recognize that GPA or parental income smacks down test scores without breaking a sweat. If Johnny doesn’t have a GPA of 3.8 or higher, elite universities are out of the question unless his parents are alumni or rich/connected enough to make it worth the school’s while.

If Sally qualifies on GPA, has a top-tier transcript (5 or more AP classes) and wants to go to a top 10 school, test scores should be 700 or higher per section. If they’re at that point, don’t waste your time or money or stress. At that point, the deciding factors aren’t scores but other intangibles, including the possibility that the admissions directors toss a pile of applications in the air and see which ones travel the farthest.

If Jesse is looking for a top 20 or 30 school, the GPA/transcript requirements are the same, but looking at the Common Data Set (CDS) of these schools, realistically a 650 or higher per section will do the trick. It might be worth boosting the test scores to low 700s, but if Jesse is a terrible tester, then don’t break the bank. One of the schools will probably come through.

If Sammy has a lower GPA (3.3 to 3.8) but excellent test scores (high 600s or higher per section), then look to the schools in the middle–say, from 40 to 60. It’s actually worth spending money to maximize Sammy’s scores, because these mid-tier schools often get a lot of high-effort hard workers with mediocre test scores. Not only will Sammy look good, but he might get some money. (By the way, if you’ve got a Sammy whose grades are much lower than his abilities, you should still push him into the hardest classes, even if he and the counselors cavil. If your Sammy is like most of them, he’s going to get Bs and Cs regardless, so he may as well get them in AP classes and get some college credit from the AP tests. And the transcript will signal better, as well.)

The biggest bang for the test prep buck lies not in making kids competitive for admissions, but in helping them test out of remediation at local universities. So if Austin has a 3.0 GPA, works hard but tests poorly, then find out the SAT cut score at his university. If he’s not above that point, then spend the money to get him there, and emphasize the importance of this effort to his college goals.

If your kid is already testing at 650 or higher, either send her to an Asian cram school (she will likely be the only white kid there, but the instruction will be excellent) or invest in a tutor. The average, mostly white class at Kaplan or Princeton might have an instructor who can fine-tune for her issues, but probably won’t.

Otherwise, start with a class and supplement with a tutor if you can afford it. Ask around for good instructors, or ask the test prep company how long the instructor has been teaching. Turnover in test prep instructors is something like 75%; the 25% who stay long term do so because they’re good. As for the tutor, I hope I’ve convinced everyone that price isn’t an issue in determining quality. I would ask around for someone like me, because our ability to get a high rate without the sales and marketing suggests we must be, in fact, pretty good. And there’s always someone like me around. Otherwise, I’d go with the private tutoring options at a test prep company, with interviews.
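
To pull the rough tiers above into one place, here’s a toy sketch in Python. It’s nothing more than the rules of thumb from the preceding paragraphs encoded as conditionals; the function name, the exact cutoffs where I had to pick a single number, and the return strings are my own illustrative choices, not a real admissions model.

```python
def rough_strategy(gpa: float, per_section_sat: int, alumni_or_rich: bool = False) -> str:
    """Toy encoding of the guidelines above (middle-class-or-higher white kids).

    gpa: high school GPA; per_section_sat: typical score per SAT section.
    The cutoffs mirror the post; anything more precise is illustrative only.
    """
    if gpa >= 3.8:
        if per_section_sat >= 700:
            return "Top-10 plausible; more prep is a waste, intangibles decide."
        if per_section_sat >= 650:
            return "Top 20-30 range; maybe nudge scores to low 700s, don't break the bank."
        return "Start with a class, supplement with a tutor; scores are the limiting factor."
    if alumni_or_rich:
        return "Elite schools may still be in play, but not because of test prep."
    if 3.3 <= gpa < 3.8 and per_section_sat >= 650:
        return "Target schools ranked roughly 40-60; maximizing scores may even bring money."
    if gpa >= 3.0:
        return "Find the local university's remediation cut score and prep to clear it."
    return "Test prep money is best spent clearing remediation cut scores."

print(rough_strategy(3.9, 710))   # top-10 message
print(rough_strategy(3.5, 680))   # schools ranked roughly 40-60
print(rough_strategy(3.0, 480))   # remediation cut score advice
```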

As I said, these rules are for middle class or higher white kids. Only 6% of blacks and Hispanics get above 600 on any section of the SAT–in fact, the emphasis on GPA came about in large part to bypass the unpleasant reality of the score gap. There are only around 300 black students who score higher than 700 on two sections of the SAT. That’s barely enough blacks for one top ten school. The rules are very different. The main reason for blacks and Hispanics to take test prep is to get their scores above the remediation number. Middle class or higher Asians face much higher standards because universities know their (or their parents’) dedication to getting good grades and good test scores is more than a tad unnatural and probably overstates their value to the campus. Athletes and artists of note play by different rules. Poor whites and poor Asians have it really, really tough.

What this means, of course, is that the kids in the Hamptons are probably already scoring 700 or higher per section and are, consequently, wasting their time. But what the hell, they’re doing the economy some good. Or maybe some of them are Asian.

Note: I wrote this focusing on the SAT but it all applies to the ACT as well, and the ACT is a much better test. I wrote about the ACT here.


What’s the difference between the SAT and the ACT?

I couldn’t find anything terribly wrong with this Ed Week article. But it didn’t offer anything terribly useful, either, so I thought I’d offer up some facts that might do some good.

Historically, the ACT was the test for the Midwest and South, and the SAT was the test for the coasts, but after the 2005 SAT changes, the ACT’s test population caught up. Both tests are given to around 1.6 million students.

Test Content

The ACT tests the same fact base as the SAT. It’s about 20 minutes shorter than the SAT, although it has far more questions, spread over four sections:

  • English: 45 minutes, 5 passages of 15 questions.
  • Math: 60 minutes, 60 questions.
  • Reading: 35 minutes, 4 passages of 10 questions.
  • Science: 35 minutes, 7 passages of 4-8 questions (40 total).

The ACT section times are brutal, which is why the ACT benchmarks purporting to report on college readiness should be taken with a healthy dose of salt. In my view, they dramatically underreport the reading, science, and (to a lesser extent) math ability of the lower to mid-range “college” students (keeping in mind that these kids shouldn’t be in college anyway, but that’s a different story).
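
To make “brutal” concrete, here’s a quick back-of-the-envelope calculation from the section times and question counts listed above (a rough sketch; it ignores the time spent actually reading the passages, which makes the real squeeze worse).

```python
# Per-question pacing implied by the ACT section times and question counts
# listed above. This ignores time spent reading the passages themselves,
# so the effective time per question is even tighter than shown.
sections = {
    "English": (45, 75),   # minutes, questions (5 passages x 15)
    "Math":    (60, 60),
    "Reading": (35, 40),   # 4 passages x 10
    "Science": (35, 40),   # 7 passages, 40 questions total
}

for name, (minutes, questions) in sections.items():
    print(f"{name}: {minutes * 60 / questions:.1f} seconds per question")

# English: 36.0 seconds per question
# Math: 60.0 seconds per question
# Reading: 52.5 seconds per question
# Science: 52.5 seconds per question
```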

Each section is scored on a scale of 1-36. The sections are then averaged for a Composite score, which is every bit as useless, really, as the SAT total. Colleges use the section scores far more than is generally known for placement in or out of remediation.

How do you convert ACT scores to SAT?

The University of California used to offer a direct conversion. One sign of the ACT’s growing popularity is that both tests are now converted to a “UC score”.
Roughly, a 21 on any section is the ability equivalent of a 500 on the SAT, a 26 is a 600, and a 31 a 700. However, a one-to-one conversion isn’t possible, since the ACT has four sections and the SAT has three.

The UC conversion adds two-thirds of the math/reading/science total to the English/writing combined score. This weights the converted score towards English–rather unfairly, in my view, but not enough to do serious damage.
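
Here’s a small sketch of that arithmetic in Python. The two-thirds weighting is what’s described above; the mapping of a single ACT section onto an SAT-style 200-800 score treats the rough 21≈500, 26≈600, 31≈700 rule of thumb as a straight line (20 SAT points per ACT point), which is my own simplification rather than UC’s actual table.

```python
# Sketch of the UC-style weighting described above, plus a rough
# section-level equivalence. The linear 20-points-per-ACT-point mapping is
# an illustrative simplification, not UC's real conversion table.

def act_section_to_sat(act: float) -> float:
    """Rough single-section equivalence: 21 ~ 500, 26 ~ 600, 31 ~ 700."""
    return max(200.0, min(800.0, 500.0 + (act - 21) * 20.0))

def uc_weighted_act(english_writing: float, math: float,
                    reading: float, science: float) -> float:
    """English/Writing counts in full; Math, Reading, and Science count two-thirds."""
    return english_writing + (2.0 / 3.0) * (math + reading + science)

# Illustrative scores (treating the English score as the English/Writing
# combined score, which is itself a simplification):
print(uc_weighted_act(34, 34, 36, 29))   # 100.0, out of a possible 108
print(act_section_to_sat(31))            # 700.0
print(act_section_to_sat(26))            # 600.0
```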

Which is more closely aligned to school curriculum?

Both test knowledge and abilities that students should have mastered in school; the ACT doesn’t directly test science, but content knowledge will make the questions more familiar. The ACT also tests slightly more math: trigonometry, analytic geometry (circle and ellipse equations), and the occasional matrix question. Neither tests specific content knowledge in history, science, or English; for some reason, people say the ACT does. They are wrong.

Which test should students take?

Most students will score in roughly the same percentile on each test. However, some students have strong preferences for the ACT.

Low to mid-tier students are almost always better off with the ACT, something that I wish more do-gooder organizations understood. Much of the SAT’s difficulty is front-loaded–a big challenge in many questions is simply figuring out what the question is. The ACT actually tests more material but its questions are more straightforward. Any student who prefers the concrete to the abstract should consider the ACT, and most low to mid ability students will have a preference for the concrete. However, see the caveat below regarding reading abilities.

Students with SAT section scores in the high 600s/low 700s should always check out the ACT. The 2005 SAT changes reduced the number of questions in each section by 10%, and the cuts were primarily from the higher-difficulty questions. Many students in the mentioned range are every bit as bright as those getting 760+ scores, but are less detail-oriented, and usually make a few unforced errors. They used to make up the difference with their performance on the really difficult problems. Fewer difficult problems, slightly lower scores. (I am nearly certain that the reduced number of questions caused the decline noted when the SAT was changed in 2005.)

The ACT has far more questions than the SAT–215 to 171–and has no “guessing penalty”, which gives high ability students who make the occasional unforced error a significant advantage. To give an example: my son took the old SAT as an early junior and got 690 M, 660 V. I expected him to get high 600s, low 700s on the new one, which he took in March 2005. He received 630s across the board. After working on his accuracy, he took it again and received a 690, 690, 670, or 2050.

His ACT scores were English 34, Math 34, Reading 36 (a perfect score), Science 29, which in SAT terms is high 700s across the board, or a 2250 using the UC conversion. At his performance level, that’s a huge boost. I have other anecdotal evidence, but those students aren’t my kids, so I can’t discuss specifics. Without question, all high ability kids should take both to see if they have a preference.
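
A quick illustration of the “guessing penalty” point: the SAT of this era deducts a quarter of a raw point for each wrong answer on a five-choice question, while the ACT simply scores a wrong answer as zero. So for a strong student who knows the material but makes a handful of unforced errors, each slip costs more ground on the SAT. A toy calculation (raw points only; real scores go through scaling tables, so this only shows the direction of the effect):

```python
# Toy comparison of how unforced errors hit raw scores on the SAT (with its
# quarter-point deduction per wrong answer) versus the ACT (no deduction).

def sat_raw(attempted: int, wrong: int) -> float:
    """SAT-style raw score: +1 per right answer, -1/4 per wrong answer."""
    right = attempted - wrong
    return right - 0.25 * wrong

def act_raw(attempted: int, wrong: int) -> int:
    """ACT-style raw score: +1 per right answer, wrong answers just score 0."""
    return attempted - wrong

# Five unforced errors on a hypothetical 50-question section:
print(sat_raw(50, 5))   # 43.75 raw points (each slip costs 1.25 points of ground)
print(act_raw(50, 5))   # 45 raw points (each slip costs exactly 1 point)
```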

If taking both, which prep class should I take?

High ability students: take the SAT prep course. First, there are exponentially more SAT classes than ACT, even now. Asians, the primary consumers of test prep courses, don’t seem to take the ACT much (at least around here). Another major consumer, schools offering classes for their own students, also seem ignorant of the ACT.

Moreover, moving from the SAT to the ACT is far more organic than the other way round; the SAT has far more tricks and tidbits that a good test prep teacher can help with. Practicing for the ACT is little more than learning how to work fifty times faster on everything or, if that’s not possible, devising a strategy for getting as much done as possible. Did I mention the brutal timing requirements of the ACT? Oh, well, it bears repeating.

Low to mid-ability students: anyone planning a class aimed at low-income, low-ability students should select the ACT. Students with weaker abilities will receive more useful instruction, as the ACT has fewer test-specific tricks and the test prep instructor will spend more time on content.

Who Shouldn’t Take the ACT?

The ACT is reading intensive–three of the four sections involve reading comprehension, and two of those sections have (here it is for the third time) brutal time requirements. Students whose reading skills are significantly out of alignment with their other abilities (e.g., dyslexia, reading LDs) may want to stick with the SAT.