Monthly Archives: April 2015

The Day of Three Miracles

I often hook illustrative anecdotes into essays making a larger point. But this anecdote has so many applications that I’m just going to put it out there in its pure form.

A colleague whom I’ll call Chuck is pushing the math department to set a department goal. Chuck is in the process of upgrading our algebra 1 classes, and his efforts have really improved outcomes for mid-to-high ability levels, although the failure rates are a tad terrifying. He has been worried for a while that the successful algebra kids will be let down by subsequent math teachers who would hold his kids to lower standards.

“If we set ourselves the goal of getting one kid from freshman algebra all the way through to pass AP Calculus, we’ll improve instruction for everyone.” (Note: while the usual school year doesn’t allow enough time, our “4×4 full-metal block” schedule makes it possible for a dedicated kid to take a double year of math if he chooses).

Chuck isn’t pushing this goal for the sake of that one kid, as he pointed out in a recent meeting. “If we are all thinking about the kid who might make it to calculus, we’ll all be focused on keeping standards high, on making sure that we are teaching the class that will prepare that kid–if he exists–to pass AP Calculus.”

I debated internally, then spoke up. “I think the best way to evaluate your proposal is by considering a second, incompatible objective. Instead of trying to prepare every kid who starts out behind as if he can get to calculus, we could try to improve the math outcomes for the maximum number of students.”

“What do you mean?”

“We could look at our historical math completion patterns for entering freshmen algebra students, and try to improve on those outcomes. Suppose that a quarter of our freshmen take algebra. Of those students, 10% make it to pre-calc or higher. 30% make it to trigonometry, 50% make it to algebra 2, and the other 10% make it to geometry or less. And we set ourselves the goal of reducing the percentages of students who get no further than geometry or even, ideally, algebra 2, while increasing the percentages of kids who make it into trigonometry and pre-calc by senior year.”

“That’s what will happen with my proposal, too.”

“No. You want us to set standards higher, to ensure that kids getting through each course are only those qualified enough to go to Calculus and pass the AP test. That’s a small group anyway, and while you’re more sanguine than I am about the efficacy of instruction on academic outcomes, I think you’ll agree that a large chunk of kids simply won’t be the right combination of interested and capable to go all the way through.”

“Yes, exactly. But we can teach our classes as if they are.”

“Which means we’ll lose a whole bunch of kids who might be convinced to try harder to pass advanced math classes that weren’t taught as if the only objective was to pass calculus. Thus those kids won’t try, and our overall failure rate will increase. This will lower math completion outcomes.”

Chuck waved this away. “I don’t think you understand what I’m saying. There’s nothing incompatible about increasing math completion and setting standards high enough to get kids from algebra to calculus. We can do both.”

I opened my mouth…and decided against further discussion. I’d made my point. Half the department probably agreed with me. So I decided not to argue. No, really. It was, like, a miracle.

Chuck asked us all to think about committing to this instruction model.

Later that day, I ran into Chuck in the copyroom, and lo, a second miracle took place.

“Hey,” he said. “I just realized you were right. We can’t have both. If we get the lowest ability kids motivated just to try, we have to have a C to offer them, and that lowers the standard for a C, which ripples on up. We can’t keep kids working for the highest quality of A if we lower the standards for failure.”

Both copiers were working. That’s three.

**************************************************************

I do not discuss my colleagues to trash them, and if this story in any way reflects negatively on Chuck it’s not intentional. Quite the contrary, in fact. Chuck took less than a day to grasp my point and realize his goal was impossible. We couldn’t enforce higher standards in advanced math without dooming far more kids to failure, which would never be tolerated.

Thus the two of us collapsed a typical reform cycle to six hours from the ten years our country normally takes to abandon a well-meant but impossible chimera.

Many of my readers will understand the larger point implicitly. For those wondering why I chose to tell this story now, I offer up Marc Tucker, whose two-part epic on American education’s purported failures illustrates everything that’s wrong with educational thinking today. I would normally have gone into greater detail enumerating the flaws in reasoning, facts, and ambition, but that’s a lot of work and this is a damn good anecdote.

Some other work of mine that strikes me as related:

I think I’ve written about my suggested solution somewhere, but where…(rummages)….oh, yes. Here it is: Philip Dick, Preschool and Schrödinger’s Cat–the last few paragraphs.

“Reality is that which, when you stop believing in it, doesn’t go away.”

When everyone finally accepts reality, we can start crafting an educational policy that will actually improve on our current system, which does a much better job than most people understand.

But that’s a miracle for another day.


Evaluating the New PSAT: Math

Well, after the high drama of writing, the math section is pretty tame. Except the whole “oh, my god, are they serious?” part. Caveat: I’m assuming that the SAT is still a harder version of the PSAT, and that this is a representative test.

| Metric | Old SAT | Old PSAT | ACT | New PSAT |
| --- | --- | --- | --- | --- |
| Questions | 54 (44 MC, 10 grid) | 38 (28 MC, 10 grid) | 60 MC | 48 (40 MC, 8 grid) |
| Sections | 1: 20 q, 25 m; 2: 18 q, 25 m; 3: 16 q, 20 m | 1: 20 q, 25 m; 2: 18 q, 25 m | 1: 60 q, 60 m | NC: 17 q, 25 m; Calc: 31 q, 45 m |
| MPQ | 1: 1.25 mpq; 2: 1.38 mpq; 3: 1.25 mpq | 1: 1.25 mpq; 2: 1.38 mpq | 1 mpq | NC: 1.47 mpq; Calc: 1.45 mpq |
| Category | Number Operations; Algebra & Functions; Geometry & Measurement; Data & Statistics | Same as old SAT | Pre-algebra; Algebra (elem. & intermed.); Geometry (coord. & plane); Trigonometry | 1) Heart of Algebra; 2) Passport to Advanced Math; 3) Probability & Data Analysis; 4) Additional Topics in Math |

It’s going to take me a while to fully process the math section. For my first go-round, I thought I’d point out the instant takeaways, and then discuss the math questions that are going to make any SAT expert sit up and take notice.

Format
The SAT and PSAT have always given an average of 1.25 minutes per question on the all-multiple-choice sections. On the 18-question section that has 10 grid-ins, allotting 1.25 minutes to each of the 8 multiple choice questions leaves 1.5 minutes for each grid-in.

That same conversion doesn’t work on the new PSAT. However, both sections have exactly 4 grid-ins, which makes a nifty linear system. Here you go, boys and girls, check my work.

The math section that doesn’t allow a calculator has 13 multiple choice questions and 4 grid-ins, and a time limit of 25 minutes. The calculator math section has 27 multiple choice questions and 4 grid-ins, and a time limit of 45 minutes.

13x + 4y = 1500
27x + 4y = 2700

Subtract the first equation from the second:
14x = 1200
x = 85.714 seconds, or 1.42857 minutes. Let’s round it up to 1.43.
y = 96.428 seconds, or 1.607 minutes, which I shall round down to 1.6 minutes.

If–and this is a big if–the test is using a fixed average time for multiple choice and another for grid-ins, then each multiple choice question is getting a 14.3% boost in time, and each grid-in a 7% boost. But the test may be using an entirely different parameter.
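The arithmetic above is easy to check by machine. A quick sketch, using nothing beyond the question counts and time limits quoted above:

```python
# Check the per-question timing on the new PSAT math sections.
# x = seconds per multiple choice question, y = seconds per grid-in.
# No-calculator: 13 MC + 4 grid-ins in 25 minutes -> 13x + 4y = 1500
# Calculator:    27 MC + 4 grid-ins in 45 minutes -> 27x + 4y = 2700
# Subtracting the first equation from the second eliminates y.
x = (2700 - 1500) / (27 - 13)    # 14x = 1200
y = (1500 - 13 * x) / 4

print(round(x, 2), round(x / 60, 2))  # 85.71 seconds, ~1.43 minutes per MC
print(round(y, 2), round(y / 60, 2))  # 96.43 seconds, ~1.61 minutes per grid-in
```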

Question Organization

In the old SAT and ACT, the questions move from easier to more difficult; the SAT and PSAT difficulty level resets for the grid-in questions. The new PSAT does not organize the problems by difficulty. Easy problems (there are only 4) are more likely to be at the beginning, but they are interlaced with medium difficulty problems. I saw only two Hard problems in the non-calculator section, both near but not at the end. The Hard problems in the calculator section are tossed throughout the second half, with the first one showing up at question 15. However, the coding is inexplicable, as I’ll discuss later.

As nearly everyone has mentioned, any evaluation of the questions in the new test doesn’t lead to an easy distinction between “no calc” and “calc”. I didn’t use a calculator more than two or three times at any point in the test. However, the College Board may have knowledge about what questions kids can game with a good calculator. I know that the SAT Math 2c test is a fifteen minute endeavor if you get a series of TI-84 programs. (Note: Not a 15 minute endeavor to get the programs, but a 15 minute endeavor to take the test. And get an 800. Which is my theory as to why the results are so skewed towards 800.) So there may be a good organizing principle behind this breakdown.

That said, I’m doubtful. The only trig question on the test is categorized as “hard”, but the question is simplicity itself if the student knows any right triangle trigonometry, which is taught in geometry. For students who don’t know any trigonometry, will a calculator help? If the answer is “no”, then why is it in this section? Worse, what if the answer is “yes”? Do not underestimate the ability of people who turned the Math 2c into a 15 minute plug and play to come up with programs to automate checks for this sort of thing.

Categories

Geometry has disappeared. Not just from the categories, either. The geometry formula box has been expanded considerably.

There are only three plane geometry questions on the test. One is actually an algebra question using the perimeter formula. Another is a variation question using a trapezoid’s area. Interestingly, neither the rectangle perimeter nor the trapezoid area formula was provided. (To reinforce an earlier point, both of these questions were in the calculator section. I don’t know why; they’re both pure algebra.)

The last geometry question really involves ratios; I simply picked the multiple choice answer that had 7 as a factor.

I could only find one coordinate geometry question, barely. Most of the other xy plane questions were analytic geometry, rather than the basic skills that you usually see regarding midpoint and distance–both of which were completely absent. Nothing on the Pythagorean Theorem, either. Freaky deaky weird.

When I wrote about the Common Core math standards, I mentioned that most of geometry had been pushed down into seventh and eighth grade. In theory, anyway. Apparently the College Board thinks that testing geometry will be too basic for a test on college-level math? Don’t know.

Don’t you love the categories? You can see which ones the makers cared about. Heart of Algebra. Passport to Advanced Math! Meanwhile, geometry and the one trig question are stuck under “Additional Topic in Math”. As opposed to the “Additional Topic in History”, I guess.

Degree of Difficulty

I worked the new PSAT test while sitting at a Starbucks. Missed three on the no-calculator section, but two of them were careless errors due to clatter and haste. In one case I flipped a negative in a problem I didn’t even bother to write down, in the other I missed a unit conversion (have I mentioned before how measurement issues are the obsessions of petty little minds?)

The one I actually missed was a function notation problem. I’m not fully versed in function algebra and I hadn’t really thought this one through. I think I’ve seen it before on the SAT Math 2c test, which I haven’t looked at in years. Takeaway— if I’m weak on that, so are a lot of kids. I didn’t miss any on the calculator section, and I rarely used a calculator.

But oh, my lord, the problems. They aren’t just difficult. The original, pre-2005 SAT had a lot of tough questions. But those questions relied on logic and intelligence—that is, they sought out aptitude. So a classic “diamond in the rough” who hadn’t had access to advanced math could still score quite well. Meanwhile, on both the pre- and post-2005 tests, kids who weren’t terribly advanced in either ability or transcript faced a test that had plenty of familiar material, with or without coaching, because the bulk of the test is arithmetic, algebra I, and geometry.

The new PSAT and, presumably, the SAT, is impossible to do unless the student has taken and understood two years of algebra. Some will push back and say oh, don’t be silly, all the linear systems work is covered in algebra I. Yeah, but kids don’t really get it then. Not even many of the top students. You need two years of algebra even as a strong student, to be able to work these problems with the speed and confidence needed to get most of these answers in the time required.

And this is the PSAT, a test that students take at the beginning of their junior year (or sophomore, in many schools), so the College Board has created a test with material that most students won’t have covered by the time they are expected to take the test. As I mentioned earlier, California alone has nearly a quarter of a million sophomores and juniors in algebra and geometry. Will the new PSAT or the SAT be able to accurately assess their actual math knowledge?

Key point: The SAT and the ACT’s ability to reflect a full range of abilities is an unacknowledged attribute of these tests. Many colleges use these tests as placement proxies, including many, if not most or all, of the public university systems.

The difficulty level I see in this new PSAT makes me wonder what the hell the organization is up to. How can the test reveal anything meaningful about kids who a) haven’t yet taken algebra 2 or b) have taken algebra 2 but didn’t really understand it? And if David Coleman’s answer is “Those testers aren’t ready for college so they shouldn’t be taking the test” then I have deep doubts that David Coleman understands the market for college admissions tests.

Of course, it’s also possible that the SAT will yield the same range of scores and abilities despite being considerably harder. I don’t do psychometrics.

Examples:

[image: newpsatmath10]

Here’s the function question I missed. I think I get it now. I don’t generally cover this degree of complexity in Precalc, much less algebra 2. I suspect this type of question will be the sort covered in new SAT test prep courses.

[image: mathnocalcquads]

These two are fairly complicated quadratic questions. The question on the left reveals that the SAT is moving into new territory; previously, SAT never expected testers to factor a quadratic unless a=1. Notice too how it uses the term “divisible by x” rather than the more common term, “x is a factor”. While all students know that “2 is a factor of 6” is the same as “6 is divisible by 2”, it’s not a completely intuitive leap to think of variable factors in the same way. That’s why we cover the concept–usually in late algebra 2, but much more likely in pre-calc. That’s when synthetic division/substitution is covered–as I write in that piece, I’m considered unusual for introducing “division” of this form so early in the math cycle.

The question on the right is a harder version of an SAT classic misdirection. The test question doesn’t appear to give enough information, until you realize it’s not asking you to identify the equation and solve for a, b, and c–just plug in the point and yield a new relationship between the variables. But these questions always used to show up in linear equations, not quadratics.

That’s the big news: the new PSAT is pushing quadratic fluency in a big way.

Here, the student is expected to find the factors of 1890:

[image: newpsatperimeter]

This is a quadratic system. I don’t usually teach these until Pre-Calc, but then my algebra 2 classes are basically algebra one on steroids. I’m not alone in this.

No doubt there’s a way to game this problem with the answer choices that I’m missing, but to solve this in the forward fashion you either have to use the quadratic formula or, as I said, find all the factors of 1890, which is exactly what the answer document suggests. I know of no standardized test that requires knowledge of the quadratic formula. The old school GRE never did; the new one might (I don’t coach it anymore). The GMAT does not require knowledge of the quadratic formula. It’s possible that the CATs push a quadratic formula question to differentiate at the 800 level, but I’ve never heard of it. The ACT has not ever required knowledge of the quadratic formula. I’ve taught for Kaplan and other test prep companies, and the quadratic formula is not covered in most test prep curricula.
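For what it’s worth, the factor search the answer document suggests doing by hand is trivial by machine. A sketch (the actual question’s constraints on the two factors aren’t reproduced here):

```python
# List all factor pairs of 1890 -- the search the answer document
# suggests doing by hand.
n = 1890
pairs = [(d, n // d) for d in range(1, int(n ** 0.5) + 1) if n % d == 0]
print(len(pairs))   # 16 pairs
print(pairs[-1])    # (42, 45), the near-square pair
```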

Here’s one of the inexplicable difficulty codings I mentioned–this one is coded as Medium difficulty.

As big a deal as that is, this one’s even more of a shock: a quadratic and linear system.

[image: newpsatsystemlineparabola]

The answer document suggests putting the quadratic into vertex form, then plugging in the point and solving for a. I solved it with a linear system. Either way, after solving the quadratic you find the equation of the line and set the two equal to solve. I am…stunned. Notice it’s not a multiple choice question, so no plug and play.

Then, a negative 16 problem–except it uses meters, not feet. That’s just plain mean.
[image: newpsatmathneg16]

Notice that the problem gives three complicated equations. However, those who know the basic algorithm (h(t) = -4.9t² + v₀t + s₀) can completely ignore the equations and solve a fairly easy problem. Those who don’t know the basic algorithm will have to figure out how to combine the equations to solve the problem, which is much more difficult. So this problem represents dramatically different levels of difficulty based on whether or not the student has been taught the algorithm. And in that case, the problem is quite straightforward, so it should be coded as Medium difficulty. But no, it’s tagged as Hard. As is this extremely simple graph interpretation problem. I’m confused.
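For reference, here’s a sketch of the algorithm (the coefficient is -4.9 in meters because g ≈ 9.8 m/s²; the numbers below are illustrative, not the actual test question’s):

```python
# Projectile height in meters: h(t) = -4.9t^2 + v0*t + s0,
# where v0 = launch velocity (m/s) and s0 = launch height (m).
def height(t, v0, s0):
    return -4.9 * t ** 2 + v0 * t + s0

# Illustrative values only: launched upward at 19.6 m/s from the ground
# (s0 = 0), the projectile lands 4 seconds later.
print(height(4, 19.6, 0))   # 0.0
```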

Recall: if the College Board keeps the traditional practice, the SAT will be more difficult.

So this piece is long enough. I have some thoughts–rather, questions–on what on earth the College Board’s intentions are, but that’s for another test.

tl;dr Testers will get a little more time to work much harder problems. Geometry has disappeared almost entirely. Quadratics beefed up to the point of requiring a steroids test. Inexplicable “calc/no calc” categorization. College Board didn’t rip off the ACT math section. If the new PSAT is any indication, I do not see how the SAT can be used by the same population for the same purpose unless the CB does very clever things with the grading scale.


Evaluating the New PSAT: Reading and Writing

The College Board has released a new practice PSAT, which gives us a lot of info on the new SAT. This essay focuses on the reading and writing sections.

As I predicted in my essay on the SAT’s competitive advantage, the College Board has released a test that has much in common with the ACT. I did not predict that the homage would go so far as test plagiarism.

This is a pretty technical piece, but not in the psychometric sense. I’m writing this as a long-time coach of the SAT and, more importantly, the ACT, trying to convey the changes as I see them from that viewpoint.

For comparison, I used these two sample ACTs, this practice SAT (old version), and this old PSAT.

Reading

The old SAT had a reading word count of about 2800 words, broken up into eight passages. Four passages were very short, just 100 words each. The longest was 800 words. The PSAT reading count was around 2000 words in six passages. This word count is reading passages only; the SAT has 19 sentence completions to the PSAT’s 13.

So SAT testers had 70 minutes to complete 19 sentence completions and 47 questions over eight passages of 2800 words total. PSAT testers had 50 minutes to complete 13 sentence completions and 27 questions over six passages of 2000 words total.

The ACT has always had 4 passages averaging 750 words, giving the tester 35 minutes to complete 40 questions (ten for each passage). No sentence completions.

Comparisons are difficult, but if you figure about 45 seconds per sentence completion, you can deduct that time from the total and come up with two rough metrics comparing reading passages only: minutes per question and words per question (on average, how many words the tester reads to answer each question).

| Metric | Old SAT | Old PSAT | ACT | New PSAT |
| --- | --- | --- | --- | --- |
| Word Count | 2800 | 2000 | 3000 | 3200 |
| Passage Count | 8 | 6 | 4 | 5 |
| Passage Length | 100-850 | 100-850 | 750 | 500-800 |
| MPQ | 1.18 | 1.49 | 0.88 | 1.27 |
| WPQ | 59.57 | 74.07 | 75 | 69.21 |

(The ACT MPQ works out to 35 minutes over 40 questions, or 0.88 minutes per question.)
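As a sketch of that deduction, here’s the old PSAT column worked out (the 45-seconds-per-completion figure is the estimate from the text above):

```python
# Old PSAT reading: 50 minutes, 13 sentence completions, 27 passage
# questions over 2000 words. Deduct ~45 s (0.75 min) per completion,
# then spread the remaining time across the passage questions.
total_minutes = 50
completions = 13
passage_questions = 27
word_count = 2000

passage_minutes = total_minutes - completions * 0.75
mpq = passage_minutes / passage_questions   # minutes per question
wpq = word_count / passage_questions        # words per question

print(round(mpq, 2), round(wpq, 2))   # 1.49 74.07
```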

I’ve read a lot of assertions that the new SAT reading text is more complex, but my brief Lexile analysis on random passages in the same category (humanities, science) showed the same range of difficulty and sentence lengths for the old SAT, current ACT, and old and new PSAT. Someone with more time and tools than I have should do an in-depth analysis.

Question types are much the same as the old format: inference, function, vocabulary in context, main idea. The new PSAT requires the occasional figure analysis, which the College Board will undoubtedly tout as unprecedented. However, the College Board doesn’t have an entire Science section, which is where the ACT assesses a reader’s ability to evaluate data and text.

Sentence completions are gone, completely. In passage length and overall reading demands, the new PSAT is remarkably similar in structure and word length to the ACT. This suggests that the SAT is going to be even longer? I don’t see how, given the time constraints.

tl;dr: The new PSAT reading section looks very similar to the current ACT reading test in structure and reading demands. The paired passage and the question types are the only holdovers from the old SAT/PSAT structure. The only new feature is actually a cobbled-up homage to the ACT science test in the form of occasional table or graph analysis.

Writing

I am so flummoxed by the overt plagiarism in this section that I seriously wonder if the test I have isn’t a fake, designed to flush out leaks within the College Board. This can’t be serious.

The old PSAT/SAT format consisted of three question types: Sentence Improvements, Identifying Sentence Error, and Paragraph Improvements. The first two question types presented a single sentence. In the first case, the student would identify a correct (or improved) version or say that the given version was best (option A). In the ISEs, the student had to read the sentence cold with no alternatives and indicate which if any underlined word or phrase was erroneous (much, much more difficult, option E was no change). In Paragraph Improvements, the reader had to answer grammar or rhetoric questions about a given passage. All questions had five options.

The ACT English section is five passages running down the left hand side of the page, with underlined words or phrases. As the tester goes along, he or she stops at each underlined section and looks to the right for a question. Some questions are simple grammar checks. Others ask about logic or writing choices—is the right transition used, is the passage redundant, what would provide the most relevant detail. Each passage has 15 questions, for a total of 75 questions in 45 minutes (9 minutes per passage, or 36 seconds per question). The tester has four choices and the “No Change” option is always A.

The new PSAT/SAT Writing/Language section is four passages running down the left hand side of the page, with underlined words or phrases. As the tester goes along, he or she stops at each underlined section and looks to the right for a question. Some questions are simple grammar checks. Others ask about logic or writing choices—is the right transition used, is the passage redundant, what would provide the most relevant detail. Each passage has 11 questions, for a total of 44 questions in 35 minutes (about 8.75 minutes per passage or 47 seconds a question). The tester has four choices and the “No Change” option is always A.

Oh, did I forget? Sometimes the tester has to analyze a graph.

The College Board appears to have simply stolen not only the structure, but various common question types that the ACT has used for years—as long as I’ve been coaching the test, which is coming on for twelve years this May.

I’ll give some samples, but this isn’t a random thing. The entire look and feel of the ACT English test has been copied wholesale—I’ll add “in my opinion” but don’t know how anyone could see this differently.

Writing Objective:

Style and Logic:

Grammar/Punctuation:

tl;dr: The College Board ripped off the ACT English test. I don’t really understand copyright law, much less plagiarism. But if the American College Test company is not considering legal action, I’d love to know why.

The PSAT reading and writing sections don’t ramp up dramatically in difficulty. Timing, yes. But the vocabulary load appears to be similar.

The College Board and the poorly informed reporters will make much of the data analysis questions, but I hope to see any such claims addressed in the context of the ACT’s considerably more challenging data analysis section. The ACT should change the name; the “Science” section only uses science contexts to test data analysis. All the College Board has done is add a few questions and figures. Weak tea compared to the ACT.

As I predicted, the College Board has definitely chosen to make the test harder to game. I’ve been slowly untangling the process by which someone who can barely speak English is able to get a high SAT verbal and writing score, and what little I know suggests that all the current methods will have to be tossed. Moving to longer passages with less time will reward strong readers, not people who are deciphering every word and comparing it to a memory bank. And the sentence completions, which I quite liked, were likely being gamed by non-English speakers.

In writing, leaving the plagiarism issue aside for more knowledgeable folk, the move to passage-based writing tests will reward English speakers with lower ability levels and should hurt anyone with no English skills trying to game the test. That can only be a good thing.

Of course, that brings up my larger business question that I addressed in the competitive advantage piece: given that Asians show a strong preference for the SAT over the ACT, why would Coleman decide to kill the golden goose? But I’ll put big picture considerations aside for now.

Here’s my evaluation of the math section.


Designing Multiple Answer Math Tests

I got the idea for Multiple Answer Tests originally because I wanted to prepare my kids for Common Core Tests. (I’d rather people not use that post as the primary link, as I have done a lot more work since then.)

About six months later (a little over a year ago), I gave an update, which goes extensively into the grading of these tests, if you’re curious. At that time, I was teaching Pre-Calc and Algebra 2/Trig. This past year, I’ve been teaching Trigonometry and Algebra II. I’d never taught trig before, so all my work was new. In contrast, I have a lot of Algebra 2 tests, so I often rework a multiple choice question into a multiple answer.

I thought I’d go into the work of designing a multiple answer test, as well as give some examples of my newer work.

I design my questions on an almost ad hoc basis. Some questions I really like and keep practically intact; others get tweaked each time. I build tests from a mental question database, pulling questions in from earlier tests. So when I start a new test, I take the previous unit test, evaluate it, see if I’ve covered the same information, create new questions as needed, pull in questions I didn’t use on an earlier test, whatever. I don’t know how teachers can use the same test time and again. I’d get bored.

I recently realized my questions have a typology. Realizing this has helped me construct questions more clearly, sometimes adding a free response activity just to get the students started down the right path.

The first type of question requires modeling and/or solving one equation completely. The answer choices all involve that one process.

Trigonometry:

[image: matrig1]

I’m very proud of this question. My kids had learned how to graph the functions, but we hadn’t yet turned to modeling applications. So they got this cold, and did really well with it. (In the first class, anyway. We’ll see how the next group does in a month or so.) I had to design it in such a way to really telegraph the question’s simplicity, to convince the students to give it a shot.

Algebra II:
[image: maratsimp]

The rational expression question is incredibly flexible. I’m probably teaching pre-calc again next year and am really looking forward to beefing this question up with analysis.

Other questions are a situation or graph that can be addressed from multiple aspects. The student ends up working 2 or 3 actual calculations per question. I realized the questions look the same as the previous type, but they represent much more work and I need to start making that clear.

Trigonometry:

[image: mypythruler]

Algebra II:
[image: mafurnquest]

I love the Pythagorean Ruler question, which could be used purely for plane geometry questions, or right triangle trig. Or both. The furniture question is an early draft; I needed an inverse question and wanted some linear modeling review, so I threw together something that gave me both.

I can also use this format to test fluency on basic functions very efficiently. Instead of wasting one whole question on a trig identity, I can test four or five identities at once.

[image: matrigalg]

Or this one, also trig, where I toss in some simplification (re-expression) coupled with an understanding of the actual ratios (cosine and secant), even though they haven’t yet done any graphing. So even if they have graphing calculators (most don’t), they wouldn’t know what to look for.

[image: matrigvals]

I’m not much for “math can be used in the real world” lectures, but trigonometry is the one class where I can be all, “in your FACE!” when kids complain that they’d never see this in real life.

[image: maisuzu]

I stole the above concept from a trig book and converted it to multiple answer, but the one below I came up with all by myself, and there’s all sorts of ways to take it. (And yes, as Mark Roulo points out, it should be “the B29’s circumference is blah blah blah.” Fixed in the source.)

[image: mapropspeed]

Some other questions for Algebra II, although they can easily be beefed up for pre-calc.

[image: maparlinesys]

[image: maparabolaeq]

One of the last things I do in creating a test is consider the weight I give each question. Sometimes I realize that I’ve created a really tough question with only five answer choices (my minimum). So I’ll add some easier answer choices to give kids credit for knowledge, even if they aren’t up to the toughest concepts yet.

That’s something I’ve really liked about the format. I can push the kids at different levels with the same question, and create more answer choices to give more weight to important concepts.

The kids mostly hate the tests, but readily admit that the hatred is for all the right reasons. Many kids used to As in math are flummoxed by the format, which forces them to realize they don’t really know the math as well as they think they do. They’ve really trained their brains to spot the correct answer in a multiple choice format–or identify the wrong ones. (These are the same kids who have memorized certain freeform response questions, but are flattened by unusual situations that don’t fit directly into the algorithms.)

Other strong students do exceptionally well, often spotting question interpretations I didn’t think of, or asking excellent clarifications that I incorporate into later tests. This tells me that I’m on the right track, exposing and differentiating ability levels.

At the lower ability levels, students actually do pretty well, once I convince them not to randomly circle answers. So, for example, on a rational expression question, they might screw up the final answer, but they can identify factors in common. Or they might make a mistake in calculating linear velocity, but they correctly calculate the circumference, and can answer questions about it.

I’ve already written about the frustrations, as when the kids have correctly worked problems but didn’t identify the verbal description of their process. But that, too, is useful, as they can plainly see the evidence. It forces them to (ahem) attend to precision.

Of course, I’m less than precise myself, and one thing I really love about these tests is my ability to hide this personality flaw. But if you spot any ambiguities, please let me know.


Ian Malcolm on Eva Moskowitz

malcolmquote1

Another good piece documenting the lack of “there” at the Success Academy schools, this one by Kate Taylor at the Times.

Pretend that Judge Patrice Lessner is interrupting me every four words for this next bit:

Success Academies’ “success” will eventually be revealed as a chimera. Certainly they are skimming on a massive scale, and their attrition rates over time are pretty telling. Despite Moskowitz’s constant denials, the kids spend a shocking amount of time in test prep; one witness even saw an early “slam the exam” class.

But skimming, test prep, and attrition don’t explain enough. If Carol Burris is providing correct information here, then 45% of whites were proficient in math, and 31% in ELA. According to Robert Pondiscio, the numbers for the overwhelmingly low income black and Hispanic Success Academies were over 90% and 68%, respectively. That suggests the schools are doing more than cherrypicking.
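Taking the quoted figures at face value, the comparison is simple arithmetic. A quick sketch (the percentages are exactly the ones cited above; nothing here is new data):

```python
# Quick arithmetic check of the figures quoted above (Burris / Pondiscio).
white_math, white_ela = 45, 31   # NY white proficiency, percent
sa_math, sa_ela = 90, 68         # Success Academy ("over 90%" in math)

print(sa_math / white_math)  # 2.0: the math rate is twice the white rate
print(sa_ela / white_ela)    # about 2.19: the ELA gap is even larger
```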

I don’t know how. Unlikely to be anything as obvious as fixing the tests later or telling the kids the answers, or we’d hear about it. Possibly they are engaging in the Chinese variety of test prep.

But if low income black and Hispanic proficiency rates are twice that of whites, then the dinosaurs have escaped.

Paul Bruno is more careful, less intuitive (in his writing), and far more data-driven than, say, me. So maybe everyone doesn’t read his explication of everything we don’t know about Success Academy as howlingly skeptical, but nor would anyone see the piece as a ringing endorsement. More surprisingly, Robert Pondiscio asks “what the hell is going on at Success Academy?” in a way that doesn’t sound very flattering.

In no way are Bruno or Pondiscio going out on the ledge with me. Not for them the wise words of Ian Malcolm. I’m just saying that their articles signal considerable skepticism to me, a frequent reader of both.

I haven’t seen many respectable reformers touting Success Academy, either. Take that as you will.

Here’s a story idea for some enterprising reporter:

Contact Success Academy and ask to see score progressions for their early students. Presumably, all the students didn’t come in scoring at the top level (don’t laugh, skeptics!). So Eva and her minions should be able to provide initial scores for students–they are testing them constantly, yes?–and connect these scores to their actual state exam scores. By year. Then that enterprising reporter should track down Success Academy alumni and get their scores year by year since they’ve left. In a year, that could include SAT/ACT scores.

This would provide actual data to answer the following questions:

  1. Are the weakest students leaving the schools?
  2. Are specific students improving their demonstrated abilities during their tenure at the schools?
  3. Are alumni still doing well after they leave school?

Those questions would eliminate or at least reduce the charges of skimming, attrition, and prepping-to-the-extent-of-cheating.

I note that Kate Taylor of the Times is looking for students or parents to “share their stories”. Fewer stories. More data. Get test scores over time per student, stat!

If I’m wrong, nothing happens! No one gets fired. I’m just an amateur. It’s not like I’m claiming a frat party instigated a gang rape, or anything. And oh, yeah, the achievement gap that has plagued our education efforts for over fifty years has finally been beaten.

So if I’m wrong, someone should go look for Isla Nublar to see if the T-Rex has eaten all the velociraptors.