Tag Archives: College Board

The Challenge of Black Students and Advanced Placement

When the bell rings at Wheaton North High School, a river of white students flows into Advanced Placement classrooms. A trickle of brown and black students joins them. —The Challenge of Creating Schools That Work for Everybody, Catherine Gewertz

Gewertz’s piece is one of a million or so outlining the earnest efforts of suburban schools to increase their black and Hispanic student representation in AP classes. And indeed, these efforts are real and never-ending. I have been in two separate schools that have been mandated in no uncertain terms to get numbers up.

But the data does not suggest overrepresentation. I’m going to focus on African American representation for a few reasons. Until recently, the College Board split up Hispanic scores into three categories, none of them useful, and it’s a real hassle to combine them. Moreover, the Hispanic category has an ace in the hole known as the Spanish Language test. Whenever you see someone boasting of great Hispanic AP scores, ask how well they did in non-language courses. (Foreign language study has largely disappeared as a competitive endeavor in the US. It’s just a way for Hispanic students to get one good test score, and Chinese students to add one to their arsenal.)

College Board data goes back twenty years, so I built a simple table:

[Figure: blkaptable]

I eliminated foreign language tests and those that didn’t exist back in 1997. It’s pretty obvious from the table that the mean scores for each test have declined in almost every case:

[Figure: blkapmeanscorechg]


While the population for each test has increased, it’s been lopsided.

[Figure: blkapgrowthbytest]

It’s not hard to see the pattern behind the increases. The high-growth courses are one-offs with no prerequisites. It’s hard to convince kids to take these courses year after year–even harder to convince suburban teachers to lower their standards for that long. So put the kids in US History, Government–hey, it’s short, too!– and Statistics, which technically requires Algebra II, but not really.

The next three charts show data that isn’t often compiled for public view. I’m not good at presenting data, so there might be better ways to present this. But the message is clear enough.

First,  here’s the breakdown behind the test growth. I took the growth in each score category (5 high, 1 low) and determined its percentage of the overall growth.

[Figure: blkapscoredistributiongrowth]

See all that blue? Most of the growth has been taken up by students getting the lowest possible score. Across the academic test spectrum, black student growth in 5s and 4s is anemic compared to the robust explosion of failing 1s and 2s. Unsurprisingly, the tests that require a two to three year commitment have the best performance. Calc AB has real growth in high scores–but, alas, even bigger growth in low scores. Calc BC is the strongest performer. English Lang & Comp even has something approaching a normal distribution of scores.

Here you can see the total scores by test and category. Calc BC and European History, two of the tests with the smallest growth, have the best distributions. Only four tests have the most scores in the 1 category; most have 2 as their modal score.

[Figure: blkap1997]

The same chart in 2016 is pretty brutally slanted. Eight tests now have 1 as the modal score; just four have 2. Worst is the dramatic drop in threes. In 1997, the percentage of students scoring 3 on each test ranged from 10-38%. In 2016, it ranges from 10-20%. Meanwhile, the 4s and 5s are all well below 10%, with the cheery exception of Calculus BC.

[Figure: blkap2016]

Jay Mathews’ relentless and generally harmful push of Advanced Placement has been going strong since the 80s, even if the Challenge Index only began in 1998. So 1997’s results include a decade of “AP push”. But the last 20 years have been even worse, as Jay, Newsweek, and the Washington Post all hawked the Index as a quality signifier: America’s Best High Schools! Suddenly, low-achieving, high-minority schools had a way to bring some pride to their communities–just put their kids in AP classes.

As I wrote a couple years ago, this effort wasn’t evenly distributed. High achieving, diverse suburban high schools couldn’t just dump uninterested, low-achieving students (of any race) into a class filled with actually qualified students (of any race). Low achieving schools, on the other hand, had nothing to lose. Just dub a class “Advanced Placement” and put some kids in it. Most states cover AP costs, often using federal Title I dollars, so it’s a cheap way to get some air time.

African American AP test scores don’t represent a homogeneous population, and you can see that in the numbers. Black students genuinely committed to academic achievement in a school with equally committed peers and qualified teachers are probably best reflected in the Calculus BC scores, as BC requires about four years of successful math. Black students dumped in APUSH and AP Government are the recourse of diverse suburban schools not rich enough to ignore bureaucratic pressure to up their AP diversity. These schools take promising students with low motivation and put them in AP classes. This annoys the hell out of the parents and kids who genuinely want the rigorous course, and quite often angers the “promising” students, who are known to fail the class and refuse to take the test. The explosion of 1s across the board comes from the low-achieving urban schools that want to make the Challenge Index and have no need to keep standards high.

Remember each test costs $85 and test fees are waived by taxpayers for students who can’t afford them.  Consider all the students being forced, in many cases, to take classes they have no interest in.  Those smaller increases in passing scores are purchased with considerable wasted time and taxpayer expense.

But none of this should be news. Let’s talk about the real challenge of black students and AP scores and methods to fix the abuses.

First, schools and students should be actively restricted from using the AP grade “boost” for fraudulent purposes. The grades should be linked to the test scores without exception. Students who receive 4s and 5s get an A, even if the teacher wants to give a B[1]. Students who get a 3 receive a B, even if the teacher wants to give an A[2]. Students who get a 2 receive a C. Students who get a 1 or who don’t take the test get a D–which, remember, will be bumped to a C for GPA purposes. This sort of grade link, first suggested by Saul Geiser (although I’ve extended it to the actual high school grade) would dramatically reduce abuse not only by predominantly minority schools, but also by all students gaming the AP system to get inflated GPAs. That should reduce a lot of the blue in this picture:

[Figure: blkapscoredistributiongrowth]
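The proposed score-to-grade link amounts to a simple lookup. A minimal sketch (the function name and the None-for-no-test convention are my own, not Geiser’s):

```python
# Sketch of the proposed score-to-grade link: the AP course grade is
# fixed by the exam score, no exceptions. (Hypothetical helper, not
# official policy; None stands for a student who skipped the test.)
def linked_grade(ap_score):
    if ap_score in (4, 5):
        return "A"
    if ap_score == 3:
        return "B"
    if ap_score == 2:
        return "C"
    return "D"  # scored a 1 or skipped the test; counts as a C for GPA purposes

print(linked_grade(5), linked_grade(3), linked_grade(None))  # A B D
```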

Then we should ask a simple question: how can we bump those yellows to greys? That is, how can we get the students who demonstrated enough competence to score a 2 on the AP test to get enough motivation and learning to score a 3?

I’ve worked in test prep for years with underachieving blacks and Hispanics, and I now teach a lot of the kids not strong enough or not motivated enough to take AP classes. My school is under a great deal of pressure to get more low income, under-represented minorities into these classes as well (and my school administration is entirely non-white, as a data point). A couple years ago, I taught a US History course that resulted in four kids being “tagged” for an advanced placement class the next year–that is, they did so well in my class, having previously shown no talent or motivation, that they were put in AP Government the next year. I kept in touch with one, who got an A in the class and passed the test.

My advice to my own principal, which I would repeat to the principal in Gewertz’s piece, is to create a class full of the promising but unmotivated students, separate from the motivated students. Give them a teacher who will be rigorous but low key, who won’t give much homework, who will focus on skill improvement in class. (ahem. I’m raising my hand.) Focus on getting the kids to pass the test. If they pass, they will get a guaranteed B in the class, which will count as an A for GPA purposes. (Even if the College Board doesn’t change the rules, schools can guarantee this policy.)

This strategy would work for advanced placement classes in English, history, government, probably economics.  It could work for statistics. Getting unmotivated kids to pass AP Calculus may be more difficult, as it would involve using the strategy consistently for 3 years with no test to guarantee a grade.

The challenge of increasing the abilities and college-readiness of promising but not strongly motivated students (of any race) lies in understanding their motives. Teachers need to give their first loyalty to the students, not the content. Traditional AP teachers are reluctant to do this, and I don’t think they should be required to change. But traditional AP teachers are, perhaps, not the best teachers for this endeavor.

In order for this proposal to get any serious attention, however, reporters would have to stop pretending that talented black students aren’t taking AP courses. The data simply doesn’t support that charge. We are putting too many black students into AP courses. Too many of them are completely unfit, have remedial level skills that high schools aren’t allowed to address. Much of the growth of Advanced Placement has relied on this fraud–and again, not just for black students.

It’s what we do with the kids in the middle, the skeptics, the uncertain ones, the ones who dearly want to be proven wrong about their own skills, that will help us improve these dismal statistics.

[1] I can’t even begin to tell you how many teachers in suburban districts do this.
[2] The same teachers who give students with 4s and 5s Bs are also prone to giving As to kids who got 3s. But of course, this is also the habit of teachers in low achieving urban districts. Consider this 2006 story celebrating the first two kids ever to pass the AP English test, and wonder how many of the students got As notwithstanding.


Evaluating the New PSAT: Math

Well, after the high drama of writing, the math section is pretty tame. Except the whole “oh my god, are they serious?” part. Caveat: I’m assuming that the SAT is still a harder version of the PSAT, and that this is a representative test.

Metric    | Old SAT                | Old PSAT            | ACT           | New PSAT
Questions | 54 (44 MC, 10 grid)    | 38 (28 MC, 10 grid) | 60 MC         | 48 (40 MC, 8 grid)
Sections  | 1: 20 q, 25 m          | 1: 20 q, 25 m       | 1: 60 q, 60 m | NC: 17 q, 25 m
          | 2: 18 q, 25 m          | 2: 18 q, 25 m       |               | Calc: 31 q, 45 m
          | 3: 16 q, 20 m          |                     |               |
MPQ       | 1: 1.25                | 1: 1.25             | 1.00          | NC: 1.47
          | 2: 1.38                | 2: 1.38             |               | Calc: 1.45
          | 3: 1.25                |                     |               |
Category  | Number Operations      | Same                | Pre-algebra   | 1) Heart of Algebra
          | Algebra & Functions    |                     | Algebra (elem | 2) Passport to
          | Geometry & Measurement |                     |   & intermed) |    Advanced Math
          | Data & Statistics      |                     | Geometry      | 3) Probability &
          |                        |                     |   (coord &    |    Data Analysis
          |                        |                     |   plane)      | 4) Additional Topics
          |                        |                     | Trigonometry  |    in Math

 

It’s going to take me a while to fully process the math section. For my first go-round, I thought I’d point out the instant takeaways, and then discuss the math questions that are going to make any SAT expert sit up and take notice.

Format
The SAT and PSAT always gave an average of 1.25 minutes for multiple choice question sections. On the 18 question section that has 10 grid-ins, giving 1.25 minutes for the 8 multiple choice questions leaves 1.5 minutes for each grid in.

That same conversion doesn’t work on the new PSAT. However, both sections have exactly 4 grid-ins, which makes a nifty linear system. Here you go, boys and girls, check my work.

The math section that doesn’t allow a calculator has 13 multiple choice questions and 4 grid-ins, and a time limit of 25 minutes. The calculator math section has 27 multiple choice questions and 4 grid-ins, and a time limit of 45 minutes.

13x + 4y = 1500
27x + 4y = 2700

Subtract the first from the second:
14x = 1200
x = 85.714 seconds, or 1.42857 minutes. Let’s round that to 1.43.
y = 96.428 seconds, or 1.607 minutes, which I shall round down to 1.6 minutes.
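Checking my work with a few lines of Python (exact fractions, same two equations):

```python
from fractions import Fraction

# 13x + 4y = 1500 seconds (no-calc: 13 MC + 4 grid-ins in 25 minutes)
# 27x + 4y = 2700 seconds (calc:   27 MC + 4 grid-ins in 45 minutes)
x = Fraction(2700 - 1500, 27 - 13)  # subtracting eliminates y: 14x = 1200
y = (1500 - 13 * x) / 4             # back-substitute into the first equation

print(float(x), float(y))  # 85.714... and 96.428... seconds
```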

If–and this is a big if–the test is using a fixed average time for multiple choice and another for grid-ins, then each multiple choice question is getting a 14.3% boost in time, and each grid-in a 7% boost. But the test may be using an entirely different parameter.

Question Organization

In the old SAT and ACT, the questions move from easier to more difficult. The SAT and PSAT difficulty level resets for the grid-in questions. The new PSAT does not organize the problems by difficulty. Easy problems (there are only 4) are more likely to be at the beginning, but they are interlaced with medium difficulty problems. I saw only two Hard problems in the non-calculator section, both near but not at the end. The Hard problems in the calculator section are tossed throughout the second half, with the first one showing up at question 15. However, the coding is inexplicable, as I’ll discuss later.

As nearly everyone has mentioned, any evaluation of the questions in the new test doesn’t lead to an easy distinction between “no calc” and “calc”. I didn’t use a calculator more than two or three times at any point in the test. However, the College Board may have knowledge about what questions kids can game with a good calculator. I know that the SAT Math 2c test is a fifteen minute endeavor if you get a series of TI-84 programs. (Note: Not a 15 minute endeavor to get the programs, but a 15 minute endeavor to take the test. And get an 800. Which is my theory as to why the results are so skewed towards 800.) So there may be a good organizing principle behind this breakdown.

That said, I’m doubtful. The only trig question on the test is categorized as “hard”. But the question is simplicity itself if the student knows any right triangle trigonometry, which is taught in geometry. But for students who don’t know any trigonometry, will a calculator help? If the answer is “no”, then why is it in this section? Worse, what if the answer is “yes”? Do not underestimate the ability of people who turned the Math 2c into a 15 minute plug and play to come up with programs to automate checks for this sort of thing.

Categories

Geometry has disappeared. Not just from the categories, either. The geometry formula box has been expanded considerably.

There are only three plane geometry questions on the test. One is actually an algebra question using the perimeter formula. Another is a variation question using a trapezoid’s area. Interestingly, neither the rectangle perimeter nor the trapezoid area formula is provided. (To reinforce an earlier point, both of these questions were in the calculator section. I don’t know why; they’re both pure algebra.)

The last geometry question really involves ratios; I simply picked the multiple choice answer that had 7 as a factor.

I could only find one coordinate geometry question, barely. Most of the other xy plane questions were analytic geometry, rather than the basic skills that you usually see regarding midpoint and distance–both of which were completely absent. Nothing on the Pythagorean Theorem, either. Freaky deaky weird.

When I wrote about the Common Core math standards, I mentioned that most of geometry had been pushed down into seventh and eighth grade. In theory, anyway. Apparently the College Board thinks that testing geometry will be too basic for a test on college-level math? Don’t know.

Don’t you love the categories? You can see which ones the makers cared about. Heart of Algebra. Passport to Advanced Math! Meanwhile, geometry and the one trig question are stuck under “Additional Topic in Math”. As opposed to the “Additional Topic in History”, I guess.

Degree of Difficulty

I worked the new PSAT test while sitting at a Starbucks. Missed three on the no-calculator section, but two of them were careless errors due to clatter and haste. In one case I flipped a negative in a problem I didn’t even bother to write down, in the other I missed a unit conversion (have I mentioned before how measurement issues are the obsessions of petty little minds?)

The one I actually missed was a function notation problem. I’m not fully versed in function algebra and I hadn’t really thought this one through. I think I’ve seen it before on the SAT Math 2c test, which I haven’t looked at in years. Takeaway— if I’m weak on that, so are a lot of kids. I didn’t miss any on the calculator section, and I rarely used a calculator.

But oh, my lord, the problems. They aren’t just difficult. The original, pre-2005 SAT had a lot of tough questions. But those questions relied on logic and intelligence—that is, they sought out aptitude. So a classic “diamond in the rough” who hadn’t had access to advanced math could still score quite well. Meanwhile, on both the pre and post 2005 tests, kids who weren’t terribly advanced in either ability or transcript faced a test that had plenty of familiar material, with or without coaching, because the bulk of the test is arithmetic, algebra I, and geometry.

The new PSAT and, presumably, the SAT, is impossible to do unless the student has taken and understood two years of algebra. Some will push back and say oh, don’t be silly, all the linear systems work is covered in algebra I. Yeah, but kids don’t really get it then. Not even many of the top students. You need two years of algebra even as a strong student, to be able to work these problems with the speed and confidence needed to get most of these answers in the time required.

And this is the PSAT, a test that students take at the beginning of their junior year (or sophomore, in many schools), so the College Board has created a test with material that most students won’t have covered by the time they are expected to take the test. As I mentioned earlier, California alone has nearly a quarter of a million sophomores and juniors in algebra and geometry. Will the new PSAT or the SAT be able to accurately assess their actual math knowledge?

Key point: The SAT and the ACT’s ability to reflect a full range of abilities is an unacknowledged attribute of these tests. Many colleges use these tests as placement proxies, including many, if not most or all, of the public university systems.

The difficulty level I see in this new PSAT makes me wonder what the hell the organization is up to. How can the test reveal anything meaningful about kids who a) haven’t yet taken algebra 2 or b) have taken algebra 2 but didn’t really understand it? And if David Coleman’s answer is “Those testers aren’t ready for college, so they shouldn’t be taking the test”, then I have deep doubts that David Coleman understands the market for college admissions tests.

Of course, it’s also possible that the SAT will yield the same range of scores and abilities despite being considerably harder. I don’t do psychometrics.

Examples:

[Figure: newpsatmath10]

Here’s the function question I missed. I think I get it now. I don’t generally cover this degree of complexity in Precalc, much less algebra 2. I suspect this type of question will be the sort covered in new SAT test prep courses.

[Figure: mathnocalcquads]

These two are fairly complicated quadratic questions. The question on the left reveals that the SAT is moving into new territory; previously, SAT never expected testers to factor a quadratic unless a=1. Notice too how it uses the term “divisible by x” rather than the more common term, “x is a factor”. While all students know that “2 is a factor of 6” is the same as “6 is divisible by 2”, it’s not a completely intuitive leap to think of variable factors in the same way. That’s why we cover the concept–usually in late algebra 2, but much more likely in pre-calc. That’s when synthetic division/substitution is covered–as I write in that piece, I’m considered unusual for introducing “division” of this form so early in the math cycle.

The question on the right is a harder version of an SAT classic misdirection. The test question doesn’t appear to give enough information, until you realize it’s not asking you to identify the equation and solve for a, b, and c–just plug in the point and yield a new relationship between the variables. But these questions always used to show up in linear equations, not quadratics.

That’s the big news: the new PSAT is pushing quadratic fluency in a big way.

Here, the student is expected to find the factors of 1890:

[Figure: newpsatperimeter]

This is a quadratic system. I don’t usually teach these until Pre-Calc, but then my algebra 2 classes are basically algebra one on steroids. I’m not alone in this.

No doubt there’s a way to game this problem with the answer choices that I’m missing, but to solve it in the forward fashion you either have to use the quadratic formula or, as I said, find all the factors of 1890, which is exactly what the answer document suggests. I know of no standardized test that requires knowledge of the quadratic formula. The old-school GRE never did; the new one might (I don’t coach it anymore). The GMAT doesn’t. The ACT never has. It’s possible that the CATs push a quadratic formula question to differentiate at the 800 level, but I’ve never heard of it. I’ve taught for Kaplan and other test prep companies, and the quadratic formula is not covered in most test prep curricula.
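The factor-listing route is mechanical enough to sketch (the problem’s other constraints aren’t reproduced here, so this only enumerates the candidate pairs):

```python
# Brute-force route the answer document suggests: list every factor
# pair of 1890, then check each pair against the problem's constraints.
n = 1890
pairs = [(d, n // d) for d in range(1, int(n**0.5) + 1) if n % d == 0]
print(len(pairs), pairs[-1])  # 16 pairs; the last is (42, 45)
```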

Here’s one of the inexplicable difficulty codings I mentioned–this problem is coded as Medium difficulty.

As big a deal as that is, this one’s even more of a shock: a quadratic and linear system.

[Figure: newpsatsystemlineparabola]

The answer document suggests putting the quadratic into vertex form, then plugging in the point and solving for a. I solved it with a linear system. Either way, after solving the quadratic you find the equation of the line and set the two equal to solve. I am… stunned. Notice it’s not a multiple choice question, so no plug and play.
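The vertex-form approach can be illustrated with hypothetical numbers (not the released item): a parabola with vertex (2, −1) passing through (0, 3), intersected with the line y = x − 1.

```python
import math

# Vertex form y = a(x - h)^2 + k with vertex (2, -1); plug in (0, 3) to find a.
h, k = 2, -1
px, py = 0, 3
a = (py - k) / (px - h) ** 2  # a = 1

# Intersect with y = x - 1: a(x - h)^2 + k = x - 1, expanded to
# standard form A*x^2 + B*x + C = 0, then the quadratic formula.
A, B, C = a, -(2 * a * h + 1), a * h**2 + k + 1
disc = B * B - 4 * A * C
roots = sorted(((-B - math.sqrt(disc)) / (2 * A), (-B + math.sqrt(disc)) / (2 * A)))
print(a, roots)  # 1.0 [1.0, 4.0]
```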

Then, a negative 16 problem–except it uses meters, not feet. That’s just plain mean.
[Figure: newpsatmathneg16]

Notice that the problem gives three complicated equations. However, those who know the basic algorithm–h(t) = -4.9t² + v₀t + s₀–can completely ignore the equations and solve a fairly easy problem. Those who don’t know the algorithm will have to figure out how to coordinate the equations, which is much more difficult. So the problem represents dramatically different levels of difficulty depending on whether the student has been taught the algorithm–and in that case, it’s straightforward enough that it should be coded as Medium difficulty. But no, it’s tagged as Hard. As is this extremely simple graph interpretation problem. I’m confused.
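For reference, the algorithm in question, with hypothetical numbers (the released item’s values aren’t reproduced here): a projectile launched upward at 24.5 m/s from ground level.

```python
import math

# h(t) = -4.9t^2 + v0*t + s0  (meters; -4.9 is half of g = 9.8 m/s^2)
v0, s0 = 24.5, 0.0

def height(t):
    return -4.9 * t**2 + v0 * t + s0

# Landing time: the positive root of -4.9t^2 + v0*t + s0 = 0,
# via the quadratic formula (equivalently 4.9t^2 - v0*t - s0 = 0).
t_land = (v0 + math.sqrt(v0**2 + 4 * 4.9 * s0)) / (2 * 4.9)
print(t_land)  # 5.0 seconds aloft
```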

Recall: if the College Board keeps the traditional practice, the SAT will be more difficult.

So this piece is long enough. I have some thoughts–rather, questions–on what on earth the College Board’s intentions are, but that’s for another post.

tl;dr Testers will get a little more time to work much harder problems. Geometry has disappeared almost entirely. Quadratics beefed up to the point of requiring a steroids test. Inexplicable “calc/no calc” categorization. College Board didn’t rip off the ACT math section. If the new PSAT is any indication, I do not see how the SAT can be used by the same population for the same purpose unless the CB does very clever things with the grading scale.


Evaluating the New PSAT: Reading and Writing

The College Board has released a new practice PSAT, which gives us a lot of info on the new SAT. This essay focuses on the reading and writing sections.

As I predicted in my essay on the SAT’s competitive advantage, the College Board has released a test that has much in common with the ACT. I did not predict that the homage would go so far as test plagiarism.

This is a pretty technical piece, but not in the psychometric sense. I’m writing this as a long-time coach of the SAT and, more importantly, the ACT, trying to convey the changes as I see them from that viewpoint.

For comparison, I used these two sample ACTs, this practice SAT (old version), and this old PSAT.

Reading

The old SAT had a reading word count of about 2800 words, broken up into eight passages. Four passages were very short, just 100 words each. The longest was 800 words. The PSAT reading count was around 2000 words in six passages. This word count is reading passages only; the SAT has 19 sentence completions to the PSAT’s 13.

So SAT testers had 70 minutes to complete 19 sentence completions and 47 questions over eight passages of 2800 words total. PSAT testers had 50 minutes to complete 13 sentence completions and 27 questions over six passages of 2000 words total.

The ACT has always had 4 passages averaging 750 words, giving the tester 35 minutes to complete 40 questions (ten for each passage). No sentence completions.

Comparisons are difficult, but if you figure about 45 seconds per sentence completion, you can deduct that from the total time and come up with two rough metrics comparing reading passages only: minutes per question (MPQ) and words per question (WPQ)–on average, how many words the tester reads to answer each question.

Metric         | Old SAT | Old PSAT | ACT  | New PSAT
Word Count     | 2800    | 2000     | 3000 | 3200
Passage Count  | 8       | 6        | 4    | 5
Passage Length | 100-850 | 100-850  | 750  | 500-800
MPQ            | 1.18    | 1.49     | 0.88 | 1.27
WPQ            | 59.57   | 74.07    | 75   | 69.21
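The arithmetic behind the MPQ/WPQ rows, assuming 45 seconds (0.75 minutes) per sentence completion comes off the clock first (rounding rather than truncating, so the old-SAT MPQ lands at 1.19):

```python
# Recompute minutes-per-question (MPQ) and words-per-question (WPQ),
# deducting 45 seconds per sentence completion from the total time.
def metrics(total_min, sc_count, questions, words):
    passage_min = total_min - 0.75 * sc_count
    return round(passage_min / questions, 2), round(words / questions, 2)

print(metrics(70, 19, 47, 2800))  # old SAT
print(metrics(50, 13, 27, 2000))  # old PSAT
print(metrics(35, 0, 40, 3000))   # ACT (no sentence completions)
```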

I’ve read a lot of assertions that the new SAT reading text is more complex, but my brief Lexile analysis on random passages in the same category (humanities, science) showed the same range of difficulty and sentence lengths for the old SAT, current ACT, and old and new PSAT. Someone with more time and tools than I have should do an in-depth analysis.

Question types are much the same as the old format: inference, function, vocabulary in context, main idea. The new PSAT requires the occasional figure analysis, which the College Board will undoubtedly tout as unprecedented. However, the College Board doesn’t have an entire Science section, which is where the ACT assesses a reader’s ability to evaluate data and text.

Sentence completions are gone, completely. In passage length and overall reading demands, the new PSAT is remarkably similar in structure and word length to the ACT. This suggests that the SAT is going to be even longer? I don’t see how, given the time constraints.

tl;dr: The new PSAT reading section looks very similar to the current ACT reading test in structure and reading demands. The paired passage and the question types are the only holdovers from the old SAT/PSAT structure. The only new feature is actually a cobbled-up homage to the ACT science test in the form of occasional table or graph analysis.

Writing

I am so flummoxed by the overt plagiarism in this section that I seriously wonder if the test I have isn’t a fake, designed to flush out leaks within the College Board. This can’t be serious.

The old PSAT/SAT format consisted of three question types: Sentence Improvements, Identifying Sentence Errors, and Paragraph Improvements. The first two question types presented a single sentence. In the first case, the student would identify a correct (or improved) version or say that the given version was best (option A). In the ISEs, the student had to read the sentence cold, with no alternatives, and indicate which underlined word or phrase, if any, was erroneous (much, much more difficult; option E was “no error”). In Paragraph Improvements, the reader had to answer grammar or rhetoric questions about a given passage. All questions had five options.

The ACT English section is five passages running down the left hand side of the page, with underlined words or phrases. As the tester goes along, he or she stops at each underlined section and looks to the right for a question. Some questions are simple grammar checks. Others ask about logic or writing choices—is the right transition used, is the passage redundant, what would provide the most relevant detail. Each passage has 15 questions, for a total of 75 questions in 45 minutes (9 minutes per passage, or 36 seconds per question). The tester has four choices and the “No Change” option is always A.

The new PSAT/SAT Writing/Language section is four passages running down the left hand side of the page, with underlined words or phrases. As the tester goes along, he or she stops at each underlined section and looks to the right for a question. Some questions are simple grammar checks. Others ask about logic or writing choices—is the right transition used, is the passage redundant, what would provide the most relevant detail. Each passage has 11 questions, for a total of 44 questions in 35 minutes (about 8.75 minutes per passage or 47 seconds a question). The tester has four choices and the “No Change” option is always A.

Oh, did I forget? Sometimes the tester has to analyze a graph.

The College Board appears to have simply stolen not only the structure, but various common question types that the ACT has used for years—as long as I’ve been coaching the test, which is coming on for twelve years this May.

I’ll give some samples, but this isn’t a random thing. The entire look and feel of the ACT English test has been copied wholesale—I’ll add “in my opinion” but don’t know how anyone could see this differently.

Writing Objective: [sample images omitted]

Style and Logic: [sample images omitted]

Grammar/Punctuation: [sample images omitted]

tl;dr: The College Board ripped off the ACT English test. I don’t really understand copyright law, much less plagiarism. But if the American College Test company is not considering legal action, I’d love to know why.

The PSAT reading and writing sections don’t ramp up dramatically in difficulty. Timing, yes. But the vocabulary load appears to be similar.

The College Board and the poorly informed reporters will make much of the data analysis questions, but I hope to see any such claims addressed in the context of the ACT’s considerably more challenging data analysis section. The ACT should change the name; the “Science” section only uses science contexts to test data analysis. All the College Board has done is add a few questions and figures. Weak tea compared to the ACT.

As I predicted, the College Board has definitely chosen to make the test more difficult to game. I’ve been slowly untangling the process by which someone who can barely speak English is able to get a high SAT verbal and writing score, and what little I know suggests that all the current methods will have to be tossed. Moving to longer passages with less time will reward strong readers, not people who are deciphering every word and comparing it to a memory bank. And the sentence completions, which I quite liked, were likely being gamed by non-English speakers.

In writing, leaving the plagiarism issue aside for more knowledgeable folk, the move to passage-based writing tests will reward English speakers with lower ability levels and should hurt anyone with no English skills trying to game the test. That can only be a good thing.

Of course, that brings up my larger business question that I addressed in the competitive advantage piece: given that Asians show a strong preference for the SAT over the ACT, why would Coleman decide to kill the golden goose? But I’ll put big picture considerations aside for now.

Here’s my evaluation of the math section.


The SAT is Corrupt. No One Wants to Know.

“We got a recycled test, BTW. US March 2014.”

This was posted on the College Confidential site, very early in the morning on December 6, the test date for the international SAT.

Did you get it?

Get what?

I mean how do you know it was a recycled March test? Do you have the March US test?

Oh, no. I just typed in one of the math questions from today’s test and the March US 2014 forum popped right up.

And of course, the March 2014 test thread has all the answers spelled out. The kids (assuming it’s kids) build a Google doc in which they compile all the questions and answers.

This is a pattern that goes on for every SAT, both domestic and international. The kids clearly are using technology during the test. They acknowledge storing answers on their calculators, but don’t explain what allows them to remember all the sentence completions, reading questions and even whole passages verbatim, much less post their entire essay online. Presumably, they are using their phones to capture the images?

They create a Google doc in which they recreate as many of the questions as can be remembered (in many cases, all) and then they chew over the answers. By the end of the collaboration, they have largely recreated the test. They used to post links openly to anyone who asked. But recently the College Confidential moderators, aware that their site is being exposed as a cheating venue, have cracked down on requests for the link, while banning anyone who links to the document.

So floating out there somewhere on the Internet are copies of the actual test, which many hagwons post and then pull down (because hey, no sense letting people have them for free), as well as the results of concentrated braindumping by hundreds of testers.

For international students, “studying for the SAT” doesn’t mean increasing math and vocabulary skills, but rather memorizing the answers of as many tests as possible.

And those are just the kids that aren’t paying for the answers.

The wealthy but not super-rich parents who want a more structured approach pay cram schools–be they hagwons, jukus or buxiban–to provide kids with all the recycled tests and have them memorize every question. No, not learn the subject. Memorize. As described here, cram schools provide a “key king”, a compilation of the answer sequences for each section of every potential international test. They know which ones will be recycled because the CB “withholds” these tests.

Of course, the super-rich parents don’t want to fuss their kids with all that memorizing. Cram schools have obtained copies of all the potential international tests by paying testers to photograph them. Then they pay someone to take the SAT in the earliest time zone for the International, and disseminate the news via text to all the testers. They just copy the answers from the pictures. Using phones. Which they have told the proctors they don’t have, of course.

I don’t know exactly how all this works—for example, are the cram schools offering tiered pricing for key kings vs. phoned in answers? Do different cram schools have different offerings? I’ve read through the documented process provided by Bob Schaeffer of FairTest (a guy I don’t often agree with), and it seems very credible. He’s also provided a transcript of an offer to provide answers to the test. Valerie Strauss got on the record accounts of this process from two international administrators, Ffiona Rees and Joachim Ekstrom.

Every so often Alexander Russo complains that Valerie Strauss shouldn’t do straight education reporting, given her open advocacy against reform.

Great. So where’s all the other hard reporting on this topic? The New York Times, whose public editor Margaret Sullivan just encouraged it to “enlighten citizens, hold powerful people and institutions accountable and maybe even make the world a better place”, bleeds for the poor Korean and Chinese testers anxious for their scores and concerned they’ll be tarred with the same brush. Everyone else just spits out the College Board press release–if they mention it at all. While most news outlets reported the October cancellation, few other than Strauss reported that the November and December international test scores were delayed as well.

At the same time that Strauss reported the College Board was stonewalling any inquiries as to how many kids were cheating, how many scores were cancelled, or what it was doing to prevent further corruption, an actual Post “reporter”, Anna Fifield, regurgitated a promotional ad for a Korean SAT equivalent coach.*

Well, you can understand why. The millionaire Korean test prep coach-called-a-teacher story is one of the woefully underreported stories of the 21st century. I mean, we only had one promo put out by the Wall Street Journal the year before, and another glowing testimonial from CBS a few months later (even mentioning the tops-in-performance, bottom-in-happiness poll). But really, only one or two a year of these stories have been coming out since 2005.

So you can see why the Post felt another story on a Korean test prep instructor making millions required immediate exposure, if not anything approaching investigation or reporting.

These stories are catnip to reporters who get all their education facts from The Big Book Of Middlebrow Education Shibboleths. First, unlike our cookie cutter teacher tenure system, Korean teachers work in a real meritocracy where kids and their parents reward excellence with cash. Take that, teachers!

Then, unlike American moms and dads, Korean parents care about their kids and put billions into their education. Take that, parents!

And oy, the faith Anna shows in her subjects. Cha is a “top-ranked math teacher” who says he earned a “cool $8 million” last year. Cha says he’s been teaching for 20 years, but refuses to give his age, and there’s no mention of the topic or school he attended for his PhD, or if he ever got one. But he’s got a really popular video, so he must be great!

Some outlets are less adulatory. The Financial Times points out that the Korean government is cracking down on hagwon fees and operating hours, and preventing them from pre-teaching topics. Megastudy, the company in the 2005 story linked above, just went up for sale because of those government changes. Michael Horn of the Christensen Institute is doing no small part to alert people to the madness of the Korean system. The New York Times, despite its tears for the Korean and Chinese testers, has done its fair share to report on the endemic cheating in Chinese college applications.

But when it comes to the College Board and the SAT, everyone seems to be hands off the international market. At what point will it occur to reporters to seriously investigate whether a large chunk of the money spent on cram schools is not for instruction, but for “prior knowledge” cheating? When will they ask the Korean cram school instructors if they are fronts for an organized criminal conspiracy, if the money they get is not for tutoring, but for efficient delivery of test answers on test day? And how many of those test days are run by the College Board?

People think “well, sure, there’s some cheating, but so what? Some kids cheat.” Yeah, like I’d be writing this if it were a few dozen, or even a few hundred kids. The number of Asian immigrants cheating on major tests in this country is in the high hundreds a year. Maybe more. In China and Korea? I suspect it’s beyond our comprehension, us ethical ‘murricans.

One of the depressing things about the past three years is that I start looking into things more closely. I never really trusted the media, mind you, but I did assume that journalists skewed stories because of bias. I fondly imagined, silly me, that journalists wanted to investigate real wrongdoing. Yes. Laugh at my foolish innocence.

Consider what would be disrupted if public American pressure forced the College Board to end endemic international student cheating. First, the CB would lose millions but weep no tears; it’s a non-profit company. hahahahah! Yeah, that makes me laugh, too.

But public universities increasingly rely on international student fees and the pretense that they are qualified to do college work. After all, the thinking goes, we accept a lot of Americans who aren’t prepared for college work—may as well take in some kids who pay full freight. Private schools, too, appreciate the well-heeled Chinese students who don’t expect tuition discounts.

So suppose public pressure forces the College Board to use brand new tests for the overseas market, require all international testing to be done at US international schools, or use different tests at different locations. The College Board might decide that the international market profits weren’t worth the hassle for anyone other than US students living abroad (as indeed, the ACT seems to have done for years). Either way, a crackdown on testing security would seriously compromise Chinese and Korean students’ ability to lie about their college readiness and English skills.

A wide swath of public universities would either have to forego those delightful international fees or simply waive the SAT requirement, but without those inflated test scores it will be tough to justify letting in these kids over the huge chunk of white and Asian Americans who are actually qualified. No foreign students, more begging for money from state legislatures. Private universities would have a difficult time bragging about their elite international students without the SAT scores to back things up.

Plus, hell, we changed the source country for zombies because we didn’t want to piss off China. Three years ago, the College Board wanted to open up mainland China as a market. 95% of the SAT testers in Hong Kong are Chinese. Stop all that money flowing around? People are going to be annoyed.

At this point, I start to feel too conspiratorial, and go back to figuring that reporters just don’t care. I’ve got a lot of respect for education policy reporters—the Edweek reporters are excellent on most topics—and most reporters do a good job some of the time.

But the SAT is basically corrupt in the international market. I’ve already written about test and grade corruption among recent Asian immigrants over here, particularly in regards to the Advanced Placement tests and grades.

Yet no one seems to really care. Sure, people disapprove of the SAT, but for all the wrong reasons: it’s racist, it’s nothing more than an income test, it reinforces privilege, it has no relationship to actual ability. None of these proffered reasons for hating the SAT have any relationship to reality. But that the SAT is this huge money funnel, taking money from states and parents and shoveling it directly or indirectly into the College Board, universities, and the companies who have essentially broken the test? Eh. Whatever.

The people who are hurt by this: middle and lower middle class whites and Asian Americans. So naturally, who gives a damn?

enlighten citizens, hold powerful people and institutions accountable and maybe even make the world a better place

Sigh. Happy New Year.

*****************************
*In the comments, an actual SAT prep coach making millions–no, really, he assures us, millions!–simply by being a fabulous coach with stupendous methods is insulted that I insinuated that the Washington Post story was on an SAT prep coach, rather than the Korean equivalent of the SAT. I knew that, but at one point referred to the guy as a SAT prep coach. I fixed the text.


SAT Writing Tests–A Brief History

I have a bunch of different posts in the hopper right now, but after starting a mammoth comment on this brand new E. D. Hirsch post (Welcome to blogging, sir!), I decided to convert it to a post—after all, I need the content. (Well, it was brand new when I started this post, anyway.)

Hirsch is making a larger point about Samuel Messick’s concern with consequential validity versus construct validity but he does so using the history of the SAT. In the 80s, says Hirsch, the ETS devised a multiple choice only method of testing writing ability, which was more accurate than an essay test. But writing quality declined, he implies, because students believed that writing wasn’t important. But thanks to Messick, the SAT finally included a writing sample in its 2005 changes.

I have nothing more than a layman’s understanding of construct vs. consequential validity, and Hirsch’s expertise in the many challenges of assessing writing ability is unquestioned, least of all by me. But I know a hell of a lot about the SAT, and what he writes here just didn’t match up with what I knew. I went looking to confirm my knowledge and fill any gaps.

First, a bit of actual SAT writing assessment history:

  • By 1950, the CEEB (College Board’s original name) had introduced the English Composition Achievement Test. The original test had six sections, three multiple choice, three essay (or free response). The CEEB began experimenting with a full 2-hour essay the next year, and discontinued that in 1956. At that point, I believe, the test was changed to 100 question multiple choice only. (Cite for most of this history; here’s a second cite but you need to use the magnifying glass option.)
  • In 1960, the CEEB offered an unscored writing sample to be taken at the testing center, at the universities’ request, which would be sent on to the schools for placement scoring. (I think this was part of the SAT, but can’t be sure. Anyone have a copy of “The Story Behind the First Writing Sample”, by Fred Godshalk?)
  • In 1963, the English Composition Achievement Test was changed to its most enduring form: a 20 minute essay, followed by a 40-minute multiple choice section with 70 questions.
  • In 1968, the CEEB discontinued the unscored writing sample, again at the universities’ request. No one wanted to grade the essays.
  • In 1971, the CEEB discontinued the essay in the ECAT, citing cost concerns.
  • In 1974, the SAT was shortened from 3 hours to 2 hours and 45 minutes, and the Test of Standard Written English was added. The TSWE was multiple choice only, with questions clearly similar to the English Composition Achievement Test. The score is not included in the SAT score, but reported to colleges separately, to be used for placement.
  • In 1976, in response to complaints, the essay version of the ECAT was reinstated. (It may or may not be significant that four years later, the ETS ran its first deficit.) From what I can tell, the ECAT and the TSWE process remained largely unchanged from 1976 through 1994. This research paper shows that the essay was part of the test throughout the 80s.
  • In 1993, all achievement tests were rebranded as SAT II; the English Composition Achievement Test was renamed to the SAT II Writing exam. At some point, the SAT II was shortened from 70 to 60 questions, but I can’t find out when.
  • In 1994, there were big changes to the SAT: end to antonyms, calculators allowed, free response questions in math. While the College Board had originally intended to add a “free response” to the verbal section (that is, an essay), pressure from the University of California, the SAT’s largest customer, forced it to back down (more on this later). At this time, the TSWE was discontinued. Reports often said that the SAT Writing exam was “new”; I can find no evidence that the transition from the ECAT to the SAT II was anything but seamless.
  • In 1997, the College Board added a writing section to the PSAT that was clearly derived from the TSWE.
  • In 2005, the College Board added a writing section to the SAT. The writing section has three parts: one 25 minute essay and two multiple choice sections for a total of 49 questions. The new writing test uses the same type of questions as the ECAT/SAT II, but the essay prompt is simpler (I can personally attest to this, as I was a Kaplan tutor through the transition).
  • By the way, the ACT never required an essay until 2005, when compliance with UC’s new requirement forced it to add an optional essay.

I’m sure only SAT geeks like me care about this, but either Hirsch is wrong or all my links are wrong or incomplete. First, even with his link, I can’t tell what he’s referring to when he says “ETS devised a test…”. A few sentences before, he places the date as the early 80s. The 80s were the one decade of the past five in which the College Board made no changes to any of its writing tests. So what test is he referring to?

I think Hirsch is referring to the TSWE, which he apparently believes was derived in the early 80s, that it was a unique test, and that the College Board replaced the TSWE with the required essay in 2005. This interpretation of his errors is the only way I can make sense of his explanation.

In that case, not only are his facts wrong, but this example doesn’t support his point. The SAT proper did not test written English for admissions. The TSWE was intended for placement, not admissions. Significantly, the ACT was starting to pick up market share during this time, and the ACT has always had an excellent writing test (multiple choice, no essay). Without the TSWE, the SAT lacked a key element the ACT offered, and saying “Hey, just have your students pay to take this extra test” gave the ACT an even bigger opening. This may just possibly have played into the rationale for the TSWE.

Colleges that wanted an SAT essay test for admissions (as opposed to placement) had won that battle with the English Composition Achievement Test. The CEEB bowed to the pressures of English teachers not in 2005, but in 1963, when it put the essay back into the ECAT despite research showing that essays were unreliable and expensive. After nine years of expense the CEEB believed to be unnecessary, it tried again to do away with the essay, but the same pressures forced it to use the essay on the English Composition Achievement Test/SAT II Writing Test from 1976 to 2005, when the test was technically discontinued, but actually shortened and incorporated into the SAT proper as the SAT Writing test. Any university that felt strongly about using writing for admissions could just require the ECAT. Many schools did, including the University of California, Harvard, Stanford, and most elite schools.

The College Board tried to put an essay into the test back in the 90s, but was stopped not because anyone was concerned about construct or consequential validity, but because its largest customer, the University of California, complained and said it would stop using the SAT if an essay was required. This struck me as odd at first, because, as I mentioned, the University of California has required that all applicants take the English Composition Achievement Test since the early 60s. However, I learned in the link that Achievement Test scores weren’t used as an admissions metric until later in the 90s. In 1994, UC was using affirmative action, so it wasn’t worried about blacks and Hispanics. Asians, on the other hand, had reason to be worried about an essay test, since UC had already been caught discriminating against them, and UC clearly felt some placation was in order. Later, after the affirmative action ban, UC did a 180 on the essay, requiring that an essay be added to the SAT in 2005.

Why did the College Board want to put an essay in the SAT in 1994, and why did UC change its position 11 years later? My opinion: by then the College Board was getting more efficient at scoring essays, and the ECAT/SAT II Writing wasn’t catching on with anyone other than elite schools and UC. If the Writing test was rolled into the SAT, the College Board could charge more money. During the 90s we saw the first big push against multiple choice tests in favor of “performance-based assessments” (Hirsch has a whole chapter in one of his books about these misconceptions), giving the College Board a perfect rationale for introducing an essay and charging a lot more money. But UC nixed the essay until 2002, when its list of demands to the College Board called for removing analogies and quantitative comparisons and—suddenly—demanded that the writing assessment be rolled into the main SAT (page 15 of the UC link). I can see no reason for this—at that time, UC still required Subject tests, so why couldn’t applicants take the writing test when they took their other two Subject tests? The only reason—and I mean the only reason—I can see for rolling the writing test into the main SAT comes down to profit: the change made the College Board a hell of a lot of money.

Consider: the College Board already had the test, so no development costs beyond dumbing the test down for the entire SAT population (fewer questions, more time for the essay). So a test that only 10% of the testing population paid for could now be sold to 100% of the testing population. The 2005 SAT was both longer (in time) and shorter (in total questions), and a hell of a lot more expensive. Win win.

So UC’s demand gave the College Board cover. Fair’s fair, since UC had no research rationale whatsoever in demanding the end to analogies and quantitative comparisons, changes that would cost the College Board a great deal of money. Everyone knows that California’s ban on affirmative action has made UC very, very unhappy and if I were to assert without foundation that UC hoped and believed that removing the harder elements of the SAT would reduce the achievement gap and enable the university to admit more blacks and Hispanics, well, I’d still get a lot of takers. (Another clue: UC nearly halved the math test burden requirement at the same time—page 16 of the UC link.) (Oh, wait—Still another clue: Seven years later, after weighting the subject tests more heavily than the SAT and threatening to end the SAT requirement altogether, UC ends its use of….the Subject tests. Too many Asians being “very good at figuring out the technical requirements of UC eligibility”.)

So why does any of this matter?

Well, first, I thought it’d be useful to get the history in one place. Who knows, maybe a reporter will use it some day. Hahahahaha. That’s me, laughing.

Then, Hirsch’s assertion that the “newly devised test”, that is, the TSWE, led to a great decline in student writing ability is confusing, since the TSWE began in 1974, and was discontinued twenty years later. So when did the student writing ability decline? I’ve read before now that the seventies, not the eighties, saw writing nearly disappear from the high school curriculum (but certainly Hirsch knows about Applebee, way more than I do). If anything, writing instruction has improved, but capturing national writing ability is a challenge (again, not news to Hirsch). So where’s the evidence that student writing ability declined over the time of the TSWE, which would be 1974-1994? Coupled with the evidence that writing ability has improved since the SAT has achieved “consequential validity”?

Next, Hirsch’s history ignores the ECAT/SAT II Writing test, which offers excellent research opportunities for the impact of consequential validity. Given that UC has required a test with an essay for 50 years, Hirsch’s reasoning implies that California students would have stronger writing curriculum and abilities, given that they faced an essay test. Moreover, any state university that wanted to improve its students’ writing ability could just have required the ECAT/SAT Writing test—yet I believe UC was the only public university system in the country with that requirement. For that matter, several states require all students to take the ACT, but not the essay. Perhaps someone could research whether Illinois and Colorado (ACT required) have a weaker writing curriculum than California.

Another research opportunity might involve a comparison between the College Board’s choices and those driving American College Testing, creator of the ACT and the SAT’s only competition. I could find no evidence that the ACT was subjected to the on-again, off-again travails of the College Board’s English/Writing essay/no essay test. Not once did the College Board point to the ACT and say to all those teachers demanding an essay test, “Hey, these guys don’t have an essay, so why pick on us?” The ACT, from what I can see, never got pressured to offer an essay. This suggests, again, that the reason for all the angst over the years came not from dissatisfaction with the TSWE, but rather the Achievement/SAT II essay test, and the College Board’s varying profit motives over the years.

Finally, Hirsch’s example also assumes that the College Board, universities, high school teachers, and everyone else in 2005 were thinking about consequential or construct validity in adding the essay. I offer again my two unsupported assertions: The College Board made its 1994 and 2005 changes for business reasons. The UC opposed the change in 1994 and demanded it in 2005 for ideological reasons, to satisfy one of its various identity groups. Want to argue with me? No problem. Find me some evidence that UC was interested in anything other than broadening its admissions demographic profile in the face of an affirmative action ban, and any evidence that the College Board made the 2005 changes for any other reason than placating UC. Otherwise, the cynic’s view wins.

On some later date, I’ll write up my objections to the notion that the essay test has anything to do with writing ability, but they pulled the focus so I yanked them from this post.

By the way, I have never once met a teacher, except me, who gives a damn about helping his or her students prepare for the SAT. Where are these teachers? Can we take a survey?

Every so often, I wonder why I spend hours looking up data to refute a fairly minor point that no one really cares about in the first place and yes, this is one of those times. But dammit, I want things like this to matter. I don’t question Hirsch’s goals and agree with most of them. But I am bothered by the simplification or complete erasure of history in testing, and Hirsch, of all people, should value content knowledge.

Yeah, I did say “brief”, didn’t I? Sorry.