Monthly Archives: June 2013

An Asian Revelation

So regular school is over and I’m back teaching Asian summer school, otherwise known as Book Club/PSAT. Week 1, I had still been teaching, so I only covered the afternoon class in PSAT prep. Week 2 was the first week we had both classes, and as usual, I started out with a lecture that goes something like this:

“Anyone in here have a GPA below 3.9?”

No hands.

“Yeah. Okay, you’re sick little punks.” They laugh. No, really. “So you just wrote an essay about goals, and you all said you wanted to become better readers and writers, and I know you said that because you think that’s what I wanted to hear, even though I told you otherwise. What you really want, most of you, is an A. And that’s what your parents want, too.”

Laughs again.

“But here’s the thing: I don’t grade you. There is no A to be gotten here.”

Silence.

“So that’s what you have to consider, boys and girls, Chinese, Korean, Indian, Vietnamese, and my lone Nepalese. What does it mean to do well in a class that doesn’t have grades? How do you actually become a better reader and writer?”

Silence.

“I’m waiting.”

“Um. A higher PSAT score?”

“Hahahahaha. That’s a good one. Come on. Raise your hand if you personally know someone who hates to write essays, hates to read for fun, and got over 700 on the Reading/Writing PSAT. Oh look, everyone’s got their hand up. Hell, Sonya here is in 8th grade, she took the SAT twice last year for CTY and got, what, 550? 560? on reading and writing? Without prep. (It goes without saying that Sonya’s math score was over 700.) If all you want is higher SAT scores, come back next year for boot camp.”

“But my parents want me to do something other than watch TV this summer,” from Sam.

“You have TV?”

“Well, not cable, but I have a computer and I watch Hulu.”

“So really all you want is higher PSAT scores?”

“….No. I really want to do something other than play computer all day, and I get to hang out with friends. Plus….I always am close to getting a B in English.”

“I actually got a B last semester,” this from Wan, “and my parents were not happy.”

“Okay. So here’s the thing. You’re still talking metrics, grades, scores. I don’t have those. So if there are no grades, no scores, how do you know if you become a better reader or thinker?”

Carmella raises her hand. “I’ll know how to write essays. Like, when I have to write an essay on social justice and To Kill a Mockingbird I’ll know what to say.”

“How?”

She starts to backtrack. “No, no, Carmella, that was a great answer. That’s a good goal. I’m asking you how you achieve it. How do you know what to say?”

Karthi: “Improve your grammar?”

“Really? Knowing correct comma placement will help you convince some annoyingly liberal English teacher that you give a crap about the damage done by segregation and white prejudice?”

“Well, at least I would get a higher grammar score on the rubric.”

“Ah, which brings up another point. What does it mean to be a better writer? Do I teach you how to make a perfect cursive Z? Lorna?”

“There’s, like, grammar and stuff, and then there’s knowing what to write.”

“True. So at least two ways of becoming a better writer. First, the actual quality of your written expression: be it grammar, vocabulary, varied sentence structure. Second…..?”

“So like how you say it and….what you say?”

“That works. Okay, so let’s take it as read that you will learn the rules of grammar and punctuation and get a higher score on that section of the rubric.”

“And will learning more vocabulary make me a better writer?”

“Sure, if you internalize the vocabulary knowledge. It’s not something you can do with a test score.”

Saba: “Yeah, but if I do better on tests I’ll have more vocabulary.”

“You will? Huh. Let’s put that aside for a minute. How do you know what to write?”

Alan: “That’s what I was going to ask! How does a better vocabulary help me know how to analyze literature?”

“It doesn’t. What do you need in order to analyze literature?”

“I need to know how to analyze, what to analyze.”

“And now we come to my favorite mantra. You are saying, Alan, that you are happy to learn how to write, but you don’t know what to write.”

“Yes!” The whole class is nodding.

“Which leads me to some terrible news. Writing is thinking.”

Silence.

“See, when you say you don’t know what to write, you are actually saying…..”

“I don’t know what to think.”

“Bingo.”

“Crap.”

“Indeed. How many of you google other essays and, please god, don’t copy them directly but take the ideas and rewrite them?” A few hands go up. “Yeah. DON’T DO THAT.”

“But I have no idea what to write.”

“Okay. So when you say you want to become a better writer, you are actually expressing the need to…”

“Become a better thinker?”

“Now, I realize I’m talking to a crew who doesn’t want an opinion per se, they just want to know what their teacher wants them to say.” Far too many nods. “But this particular teacher wants you to say what is on your mind.”

“But what if there’s nothing there?”

“Welcome to adolescence, puppy. But seriously, a big part of this class will involve you thinking. And if you don’t know what to think, then I’d rather you write articulately and carefully about why you don’t know what to think, instead of making something up.”

“And that will help my vocabulary?”

“Indirectly. But what also helps your vocabulary is thinking about words. Form opinions about words. Connections to words. Remember stories I tell you about words, phrases. Like, for example, what did I say about the word ‘dint’?”

“By dint of.”

“Which means…”

“Um, by that way of doing it, or something? So you’d say ‘by dint of working my butt off, I finished the essay on time.'”

“Okay. So memorizing vocabulary will not help you. But if you think about words, if you do the homework assignments I give you thoughtfully and google usage and spend time on the process, you will slowly form memories around the words and, over time, improve your vocabulary.”

“But that’s really slow.”

“Yeah, it is. One last thing: learning vocabulary for reading is entirely different from learning vocabulary for writing. In reading, approximations do just fine. Aggregate, monolithic, bevy all have something to do with groups. Castigate, chastise, berate, reprove, admonish all have something to do with criticize. In reading, that’s all you need to know in order to dramatically increase your comprehension of the material. But using vocabulary in writing is a whole different story. So when I assign ten sentences using vocabulary words, and you write ‘I collected a monolithic of shells’, I will not be happy. I want well-written sentences, sentences that imply the meaning of the vocabulary word chosen, and I want precision in definition.”

And they nod, and I know they didn’t understand a word I said, really, but I feel better for saying it.

So yesterday, I gave them a vocabulary quiz, which I only do to make the parents and school happy. I gave them notice, so they all studied hard. They were surreptitiously studying during class, which is insane, and it was all for nothing.

Because here was the test:

All of these phrases describe moods. Your vocabulary list contained words that accurately characterize these moods.

  1. Shocked disbelief
  2. Alert watchfulness
  3. Uncaring, uninvolved
  4. Blissful happiness
  5. Dismayed disbelief
  6. Argumentative, easily angered
  7. Cranky, whiny
  8. Sensible, wise
  9. Passionate, enthusiastic
  10. Caring, concerned

The kids get out their lists. “No lists.”

“So where’s the word bank to choose from?”

“Not giving you one.”

“It’s not multiple choice?”

“DID YOU PEOPLE LISTEN TO A WORD I SAID THREE DAYS AGO????????”

“But I…”

“What do you think I meant by THERE IS NO GRADE? This isn’t a frigging test to get an A on and then forget. You all told me you wanted a stronger vocabulary. Well, then. This test is designed to make you THINK about vocabulary, what words you learned, what words might qualify for these definitions. Write down all the words you remember studying, and their definitions, and, just to reiterate, THINK ABOUT THE WORDS and what they mean. In fact, if you know a word not on the list that describes one of those moods, that’ll do just fine.”

I can’t say they did a great job, but the kids were quite pleased that they got any right at all, and bragged about it after class.

“I figured out five!”

“Man, I only got four, but I didn’t link solicitous to concerned. I should have.”

“I can’t believe it. I studied for like three hours, but I thought it’d be multiple choice! I only got 2!”

I love these kids. I really do. But realize that, speaking broadly about a large group, Asians’ grades and test scores do not reflect their actual abilities. Still, I’m doing my best to change that, fifteen kids at a time.


Algebra 1 Growth in Geometry and Algebra II, Spring 2013

This is part of an ongoing series on my Algebra II and Geometry classes. By definition, students in these classes should have some level of competence in Algebra I. I’ve been tracking their progress on an algebra I pre-assessment test. The test assesses student ability to evaluate and substitute, use PEMDAS, solve simple equations, operate with negative integers, and combine like terms. It tiptoes into first semester algebra—linear equations, simple systems, basic quadratic factoring—but the bulk of the 50 questions involve pre-algebra. While I used the test at my last school, I only thought of tracking student progress this year. My school is on a full-block schedule, which means we teach a year’s content in a semester, then repeat the whole cycle with another group of students. A usual teacher schedule is three daily 90-minute classes, with a fourth period prep. I taught one algebra II and one geometry class first semester (the third class prepared low ability students for a math graduation test); their results are here.

So in round two, I taught two Algebra 2 courses and one Geometry 10-12 (as well as a precalc class not part of this analysis). My first geometry class was freshmen only. In my last school, only freshmen who scored advanced or proficient on their 8th grade algebra test were put into geometry, while the rest took another year of algebra. In this school, all a kid has to do is pass algebra to be put into geometry, but we offer both honors and regular geometry. So my first semester class, Geometry 9, was filled with well-behaved kids with extremely poor algebra skills, as well as a quarter or so of kids who had stronger skills but weren’t interested in taking honors.

I was originally expecting my Geometry 10-12 class to be extremely low ability and so wasn’t surprised to see they had a lower average incoming score. However, the class contained 6 kids who had taken Honors Geometry as freshmen—and failed. Why? They didn’t do their homework. “Plus, proofs. Hated proofs. Boring,” said one. These kids knew the entire geometry fact base, whether or not they grokked proofs, which they will never use again. I can’t figure out how to look up their state test scores yet, but I’m betting they got basic or higher in geometry last year. But because they were put into Honors, they have to take geometry twice. Couldn’t they have been given a C in regular geometry and moved on?

But I digress. Remember that I focus on number wrong, not number right, so a decrease is good.

[Chart: Algebra 1 pre-assessment progress, Geometry and Algebra 2 classes, Spring 2013]

Again, I offer up as evidence that my students may or may not have learned geometry and second year algebra, but they know a whole lot more basic algebra than they did when they entered my class. Fortunately, my test scores weren’t obliterated this semester, so I have individual student progress to offer.

I wasn’t sure the best way to do this, so I did a scatter plot with data labels to easily show student before/after scores. The data labels aren’t reliably above or below the point, but you shouldn’t have to guess which label belongs to which point.

So in case you’re like me and have a horrible time reading these graphs: scores far over to the right on the x-axis are those who did poorly the first time. Scores low on the y-axis are those who did well the second time. So the upper right corner holds the students who were weak at both beginning and end; the lower left corner holds the strong students who did well on both.
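Here’s a minimal sketch of how a before/after scatter plot with data labels can be built in Python with matplotlib (the student labels and scores below are hypothetical, just to show the mechanics, not my actual data):

```python
import matplotlib.pyplot as plt

# Hypothetical (wrong on first test, wrong on second test) pairs per student
scores = {"S1": (30, 18), "S2": (42, 40), "S3": (15, 14), "S4": (25, 11)}

fig, ax = plt.subplots()
for label, (before, after) in scores.items():
    ax.scatter(before, after, color="steelblue")
    # Attach a data label at a small offset from each point
    ax.annotate(label, (before, after), textcoords="offset points", xytext=(4, 4))

# The diagonal marks "no change"; points below it got fewer wrong the second time
lim = max(max(pair) for pair in scores.values()) + 5
ax.plot([0, lim], [0, lim], linestyle="--", color="gray")
ax.set_xlabel("Questions wrong, first test")
ax.set_ylabel("Questions wrong, second test")
plt.show()
```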

Geometry first. Thirty-one students took both tests.

[Chart: individual before/after scores, Geometry, Spring 2013]

Four students saw no improvement, and another four actually got more wrong, though just 1 or 2 more. Another 3 students saw just one point of improvement. But notice that through the middle range, almost all the students saw enormous improvement: twelve students, over a third, got from five to sixteen more correct answers—that is, improved by 10% to over 30% of the 50-question test.

Now Algebra 2. Forty-eight students took both tests. I had more testers at the end than at the beginning; about ten students started a few days late.

[Chart: individual before/after scores, Algebra 2, Spring 2013]

Seven got exactly the same score both times, but only three declined (one of them a surprising 5 points—she was a good student. Must not have been feeling well). Eighteen (also a third) saw improvements of 5 to 16 points.

The average improvement was larger for the Algebra 2 classes than the Geometry classes, but not by much. Odd, considering that I’m actually teaching algebra, directly covering some of the topics in the test. In another sense, not so surprising, given that I am actually tasked to teach an entirely different topic in both cases. I ain’t teaching to this test. Still, I am puzzled that my algebra II students consistently show similar progress to my geometry students, even though they are soaked in the subject and my geometry students aren’t (although they are taught far more algebra than is usual for a geometry class).

I have two possible answers. Algebra 2 is insanely complex compared to geometry, particularly given I teach a very slimmed-down version of geometry. The kids have more to keep track of. This may lead to greater confusion and difficulty retaining what they’ve learned.

The other possibility is one I am reminded of by a beer-drinking buddy, a serious mathematician who also teaches math: namely, that I’m a kickass geometry teacher. He bases this assertion on a few short observations of my classes and extensive discussions, fueled by many tankards of ale, of my methods and conceptual approaches (e.g., Real-life coordinate Geometry, Geometry: Starting Off, Teaching Geometry, Teaching Congruence or Are You Happy, Professor Wu?, Kicking Off Triangles, Teaching Trig).

This possibility is a tad painful to contemplate. Fully half the classes I’ve taught in my four years of teaching—twelve out of twenty-four—have been some form of Algebra, either actual Algebra I or Algebra I pretending to be Algebra II. I spend hours thinking about teaching algebra, about making it more understandable, and I believe I’ve had some success (see my various posts on modeling).

Six of those 24 classes have been geometry. Now, I spend time thinking about geometry, too, but not nearly as much, and here’s the terrible truth: when I come up with a new method to teach geometry, whether it be an explanation or a model, it works for a whole lot longer than my methods in algebra.

For example, I have used all the old standbys for identifying slope direction, as well as devising a few of my own, and the kids are STILL doing the mental equivalent of tossing a coin to determine if it’s positive or negative. But when I teach my kids how to find the opposite and adjacent legs of an angle (see “teaching Trig” above), the kids are still remembering it months later.

It is to weep.

I comfort myself with a few thoughts. First, it’s kind of cool being a kickass geometry teacher, if that is my fate. It’s a fun class that I can sculpt to my own design, unlike algebra, which has a billion moving parts everyone needs again.

Second, my algebra II kids say without exception that they understand more algebra than they ever did in the past, that they are willing to try when before they just gave up. Even the top kids who should be in a different class tell me they’ve learned more concepts than before, when they tended to just plug and play. My algebra 2 kids are often taking math placement tests as they go off to college, and I track their results. Few of them are ending up in more than one class out of the hunt, which would be my goal for them, and the best are placing out of remediation altogether. So I am doing something right.

And suddenly, I am reminded of my year teaching all algebra, all the time, and the results. My results looked mediocre, yet the school had a stunningly successful year based on algebra growth in Hispanic and ELL students—and I taught the most algebra students and the most students in those particular categories.

Maybe what I get is what growth looks like for the bottom 75% of the ability/incentive curve.

Eh. I’ll keep mulling that one. And, as always, I’ll spend countless hours trying to think up conceptual and procedural explanations that stick.

I almost titled this post “Why Merit Pay and Value Added Assessment Won’t Work, Part IA” because if you are paying attention, that conclusion is obvious. But after starting a rant, I decided to leave it for another post.

Also glaringly on display to anyone not ignorant, willfully obtuse, or deliberately lying: Common Core standards are irrelevant. I’d be cynically neutral on them because hell, I’m not going to change what I do, except the tests will cost a fortune, so go forth ye Tea Partiers, ye anti-test progressives, and kill them standards daid.


Just Another Meaningless Policy Paper

I read so many reports that are utterly moronic from start to finish, with countless foolish assumptions and unfounded premises. Most of the time I can’t be bothered. But I don’t want to grade papers, so I thought I’d fisk this.

Are Schools Getting a Big Enough Bang For Their Education Technology Buck?

Let’s assume they aren’t. But that’s not the point.

Start with the author, Ulrich Boser:

Prior to joining the Center, Boser was a contributing editor for U.S. News & World Report, special projects director for the Washington Post Express, and research director for Education Week newspaper. His writings have appeared in The New York Times, The Washington Post, Slate, and Smithsonian.

So Ulrich is a reporter. And if google is any guide, Ulrich was not an education reporter. Just exactly the kind of background needed to make recommendations about an insanely ambiguous subject like education technology spending. Well done, Center for American Progress.

On to the paper.

For American companies, leveraging digital solutions has long been a way of doing business, and over the past sixty years, the approach has resulted in average worker productivity climbing by more than 2 percent a year due in large measure to improvements in equipment, computers, and other high-tech solutions.

Educators, however, generally do not take this approach to technology. Far too often, school leaders fail to consider how technology might dramatically improve teaching and learning, and schools frequently acquire digital devices without discrete learning goals and ultimately use these devices in ways that fail to adequately serve students, schools, or taxpayers.

Do we have any idea how technology might dramatically improve teaching? Waiting. Still waiting.

No. We don’t. So how can schools consider this?

Meanwhile, are schools pushed to use technology? Yes. Are they pushed to use technology even though no one knows how to improve teaching with it? Yes. Do schools often get money shoved at them that they must use for technology even if they would rather use the money for other purposes? Again, yes (see the Report to the President on the Use of Technology to Strengthen K-12 Education in the United States).

Oh, and the idea that American companies have a clear-eyed vision of how technology can improve their business, that all their tech investments are made with a hard-eyed assessment of the value the technology brings to the bottom line? Please. I worked in corporate America. Major investments sometimes took decades to pay off, or never did. One thing that wonks and educators have in common: they have no idea how much waste happens in business.

We found, for instance, that more than a third of middle school math students regularly used a computer for drill and practice. In contrast, only 24 percent of middle school students regularly used spreadsheets—a computer application for data analysis—for their math assignments, and just 17 percent regularly used statistical programs in math class.

Yeah, newsflash: Teachers don’t want their kids using Excel for their math homework. You know, there’s this whole other group of people who fulminate about kids and their utter reliance on calculators? Excel is a calculator. Moreover, the primary function of middle school math, assuming the kids are operating at grade level, is understanding proportional thinking. Excel does not do fractions or ratios well for the novice in proportional thinking, who has to start making connections between ratios and fractions, fractions and percentages, fractions and decimals. Excel is useless. Finally, vanishingly few middle school students (or high school students, for that matter) are capable of data analysis. One area in which progressives and reformers think as one is in their wholesale delusion that teachers could teach more challenging material, but simply choose not to.

These data varied widely across the nation. In Louisiana almost 50 percent of middle school math students said that they regularly used a computer for drill and practice. In Oregon that figure was just 25 percent.

Computers are introduced for drill and practice in regions with many, many low-skilled students, particularly in regions that are getting a lot of philanthropic attention. Are there, perhaps, demographic differences that suggest Louisiana students might need more drill and practice than Oregon students?

States are not looking at what sort of outcomes they are getting for their technology spending.

What outcomes would prove that technology spending is leading to better results? Higher test scores? What evidence is there that technology spending leads to higher test scores? Jeez, I dunno. How many articles have you seen like this one? I tried to find conclusive research on computer aided instruction, which is the most likely to have a direct impact on test scores, and I can’t find the knockout punch. And remember, Ulrich and his employers don’t like computer drill.

Lots of criticism, but I can’t see much indication as to what, exactly, we should be looking for in our technology spending that would allow us to say hey, look, it was worth the bucks! And of course, since everyone is looking to close the achievement gap and it almost certainly can’t be closed, education technology is probably doomed to fail.

We found that students from high-poverty backgrounds were far less likely to have rigorous learning opportunities when it comes to technology. Forty-one percent of eighth-grade math students from high-poverty backgrounds, for instance, regularly used computers for drill and practice. In contrast, just 29 percent of middle school students from wealthier backgrounds used the computers for the same purpose. We also found that black students were more than 20 percentage points more likely to use computers for drill and practice than white students.

In Geometry, for reasons passing understanding, we teach the Law of Syllogism. If x, then y. If y, then z. Ergo, if x, then z. Blacks and high-poverty students are more likely by huge percentages to have weak skills. Schools hope to improve weak skills by drilling with computers. Ergo…..

We found similar issues at the high school level here as well. We further noted racial disparities when it comes to computer use. Sixty-eight percent of white students regularly used computers for science class, compared to sixty percent of Hispanic students. Students of color were also less likely to have access to hands-on science projects, and just 37 percent of black students had experienced hands-on activities with simple machines in their science class over the past year. In contrast, 40 percent of white students and 45 percent of Asian students reported having such experiences.

Oh, come on. 68 vs 60? 37 vs. 40 vs. 45? Seriously?

Computers, tablets, and other devices can help boost the reach of highly effective teachers, allowing more students to study with the best math and reading teachers, for instance. Several schools have successfully experimented with such reforms, and in various forms, the schools will allow highly effective teachers to focus less on administrative duties and more on teaching. Under this approach, schools will often use support staff to take over noninstructional activities for highly effective teachers such as their lunch and recess duties, while more effective teachers take on responsibility for more students.

Cite? What schools have “successfully experimented” with these reforms? Is Ulrich talking about Rocketship? Because if he is, does he know that Rocketship Academy is all about putting Hispanic children on computers and drilling them on math facts? Which elsewhere in this article Ulrich implies is a Very Bad Thing?

In a way these findings are not surprising. We know that students of color and students in high-poverty schools are allocated less money per student, and they are far less likely to be taught by effective teachers. These factors all contribute to the nation’s large achievement gap where, on average, black and Latino students are academically about two years behind white students of the same age.

hahahahahaa. Yes, it’s bad teachers that lead blacks and Latinos to have lower achievement. Bad teachers are so pervasive that high income blacks and Hispanics do worse than or tie with low income whites.

We are certainly not arguing for the nation to stop or slow funding for education technology.

Why aren’t you? We have no idea whether educational technology improves outcomes, or what goals we have. So why should we be spending billions on technology if we don’t know whether it works or what we want it to do?

It is imperative that students graduate from high school knowing how to effectively use technology. At minimum, high school graduates should have the skills to create a spreadsheet and calculate simple formulas such as averages and percentages.

All high school graduates can create a spreadsheet. Most high school graduates do not really understand percentages, with or without a spreadsheet. That’s because we’re too busy pretending to teach them second year algebra, trigonometry and pre-calculus. And if we stopped pretending and only taught the kids who could actually learn those subjects, whilst teaching the kids who didn’t understand percentages how to work with proportions, Ulrich would be at the front of the line of people castigating schools for their racist attitudes and simplistic education for children of color.

Equally crucial is the need to increase access to technology for all students, particularly ones from disadvantaged backgrounds.

Hard to see why, really. We don’t have jobs for low ability kids, technology or no.

Technology is clearly fulfilling some of its promises. Virtual schools, for example, are offering students more course and curriculum options than conventional schools. Many virtual schools also appear to serve students relatively well. When the U.S. Department of Education conducted a detailed review of virtual education studies of both K-12 and higher education efforts, they found that students in online education actually performed slightly better than students who received face-to-face education. As the Department of Education report concluded, “[t]he meta-analysis found that, on average, students in online-learning conditions performed modestly better than those receiving face-to-face instruction.” But the report also cautioned that the increased achievement that is “associated with blended learning should not be attributed to the media, per se,” because of methodological issues.

So he starts by saying that technology is fulfilling its promise, cites a report that he says supports such a claim. Then he admits that the study actively warns against drawing any such conclusion. Does he then cite another report? No. So technology is NOT clearly fulfilling some of its promise.

So then Ulrich points out three apparently obvious recommendations:

  1. Policymakers must do more to make sure that technology promotes key learning goals. But we already know that the link between technology and educational outcomes is practically non-existent. I very much doubt that there’d be a lot of takers for a technology project that didn’t promise to improve educational outcomes. So based on past experience, policymakers should not support technology at all. The reason they support technology projects is the same one driving this idiotic report: happythink.
  2. Schools must address the digital divide. So schools MUST spend more money on technology for poor kids, even though they have no idea what they want and little in the way of evidence that increased technology spending improves scores or technology competence. And they definitely shouldn’t use computers for drilling.
  3. Advocates must push for studies of the cost-effectiveness of technology. In order to judge effectiveness, we’d need to have goals. And if the goal is “improve outcomes” then there’s little evidence now that technology does this, so perhaps we should ratchet back until we either have different goals or have evidence that technology spending improves achievement. But Ulrich and his people don’t want us to ratchet back on spending.

As I said, I have no larger point with this. I just had the motivation to write up my complaints, the better to avoid grading.

But what creates this nonsense? I assume these places have to generate meaningless position papers so their owning philanthropists think their money is well-spent? But who is evaluating their investments for effectiveness?

On ed tech itself, I would quote Larry Cuban, Does Online Instruction Work?:

These policymakers are not irrational [for pushing technology]. There is a political logic in mandating online courses for every student as a graduation requirement, starting pilot tablet and laptop programs, and encouraging a principal and cadre of teachers to create a technological innovation tailored to their school. They consult with key stakeholders in the community before inviting charter management organizations like Rocketship Schools to establish blended learning programs in their schools. These decision-makers do not need researchers to tell them that these new technologies “work.” They believe in their heart that they will work. Push-and-pull conflicting urges pit solid research studies against strong beliefs and leave unanswered the question of what kinds of evidence matter. Too often beliefs trump facts. (emphasis mine)

So Ulrich and company aren’t going to get their policy recommendations any time soon.


Why Merit Pay and Value Added Assessment Won’t Work, Part I

The year I taught Algebra I, I did a lot of data collection, some of which I discussed in an earlier post. Since I’ve been away from that school for a while, I thought it’d be a good time to finish the discussion.

I’m not a super stats person. I’m not even a mathematician. To the extent I know math, it’s applied math, with the application being “high school math problems”. This is not meant to be a statistically sound analysis, comparing Treatment A to Treatment B. But it does reveal some interesting big picture information.

This data wasn’t just sitting around. A genuine DBA could have probably whipped up the report in a few hours. I know enough SQL to get what I want, but not enough to get it quickly. I had to run reports for both years, figure out how to get the right fields, link tables, blah blah blah. I’m more comfortable with Excel than SQL, so I dumped both years to Excel files and then linked them with student id. Unfortunately, the state data did not include the subject name of each test. So I could get 2010 and 2011 math scores, but it took me a while to figure out how to get the 2010 test taken—and that was a big deal, because some of the kids whose transcripts said algebra had, in fact, taken the pre-algebra (general math) test. Not that I’m bitter, or anything.
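The linking step itself is conceptually simple once the reports are dumped to files. Here is a rough sketch of the same merge in Python/pandas, with hypothetical file and column names standing in for the real extracts:

```python
import pandas as pd

# Hypothetical exports of the two years' state score files
s2010 = pd.read_csv("scores_2010.csv")  # columns: student_id, test_name, score
s2011 = pd.read_csv("scores_2011.csv")  # columns: student_id, score

# Link the two years on student id; an inner join keeps kids with both scores
both = s2010.merge(s2011, on="student_id", suffixes=("_2010", "_2011"))

# The step that bit me: keep only kids whose 2010 test was actually algebra,
# not pre-algebra (general math)
alg = both[both["test_name"] == "Algebra I"].copy()
alg["change"] = alg["score_2011"] - alg["score_2010"]
```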

Teachers can’t get this data easily. I haven’t yet figured out how to get the data for my current school, or if it’s even possible. I don’t know what my kids’ incoming scores are, and I still haven’t figured out how my kids did on their graduation tests.

So the data you’re about to see is not something teachers or the general public generally has access to.

At my last school, in the 2010-11 school year, four teachers taught algebra to all but 25 of over 400 students. I had the previous year’s test scores for about 75% of the kids, 90% of whom had taken algebra the year before, the other 10% or so having taken pre-algebra. This is a slightly modified version of my original graph; I put in translations of the scores and percentages.

[Chart: distribution of incoming 2010 scores by teacher, algebra classes]

You should definitely read the original post to see all the issues, but the main takeaway is this: Teacher 4 has a noticeably stronger population than the other three teachers, with over 40% of her class having scored Basic or Higher the year before, usually in Algebra. I’m Teacher 3, with by far the lowest average incoming scores.

The graph includes students for whom I had 2010 school year math scores in any subject. Each teacher has 8-12 pre-algebra student scores included in their averages. Some pre-algebra kids are very strong; they just hadn’t been put in algebra as 8th graders due to an oversight. Most are extremely weak. Teachers are assessed on the growth of kids repeating algebra as well as the kids who are taking it for the first time. Again, 80% of the kids in our classes had taken algebra once; 10-20% had taken it twice (our sophomores and juniors).

Remember that at the time of these counts, I had 125 students. Two of the other teachers (T1 and T4) had just under 100, the third (T2) had 85 or so. The kids not in the counts didn’t have 2010 test scores. Our state reports student growth for those with previous years’ scores and ignores the rest. The reports imply, however, that the growth is for all students. Thanks, reports! In my case, three or four of my strongest students were missing 2010 scores, but the bulk of my students without scores were below average.

So how’d we do?

I limited the main comparison to the 230 students who took algebra both years, had scores for both years, and had one of the four teachers.

[Chart: score improvement by teacher, students who took algebra both years]

Here are the pre-algebra and algebra intervention growth numbers—pre-algebra is not part of the scores above, but the algebra intervention is a sub-group. These are tiny groups, but illustrative:

[Chart: score improvement, pre-algebra and algebra intervention students]

The individual teacher category gains/slides/pushes are above; here they are in total:
[Chart: category changes, 2010 vs. 2011, all teachers combined]

(Arrrggh, I just realized I left off the years. Vertical is 2010, horizontal is 2011.)

Of the 230 students who took algebra two years in a row, the point gain/loss categories went like this:

  Gain of more than 50 points: 57 students
  Loss of more than 20 points: 27 students
  Between a 20-point loss and a 50-point gain: 146 students
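Tallying those buckets is mechanical once the two years are linked; a sketch, reusing the hypothetical merged frame from the earlier snippet (the thresholds are the point cutoffs in the table above, not the state’s proficiency bands):

```python
def classify(change: float) -> str:
    """Bucket a year-over-year point change as a gain, slide, or push."""
    if change > 50:
        return "gain"
    if change < -20:
        return "slide"
    return "push"

alg["bucket"] = alg["change"].apply(classify)
print(alg["bucket"].value_counts())  # for our 230 kids: push 146, gain 57, slide 27
```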

Why the Slice and Dice?

As I wrote in the original post, Teacher 1 and I were positive that Teacher 4 had a much stronger student population than we did—and the data supports that belief. Consequently I suspected that no matter how I sliced the data, Teacher 4 would have the best numbers. But I wanted a much better idea of how I’d done, based on the student population.

Because one unshakeable fact kept niggling at me: our school had a tremendous year in 2010-2011, based largely on our algebra scores. We knew this all throughout the year—benchmark tests, graduation tests—and our end of year tests confirmed it, giving us a huge boost in the metrics that principals and districts cared about. And I’d taught far more algebra students than any other teacher. Yet my numbers based on the district report looked mediocre or worse. I wanted to square that circle.

The district reports the data on the right. We were never given average score increase. A kid who had a big bump in average score was irrelevant if he or she didn’t change categories, while a kid who increased 5 points from the top of one category to the bottom of another was a big win. All that mattered were category bumps. From this perspective, my scores look terrible.

I wanted to know about the data on the left. For example, Teacher 1 had far better “gain” category numbers than I did. But we had the same mean improvement overall, of 5%, with comparable increases in each category. Broken down further, Teacher 4’s spectacular numbers are accompanied by a huge standard deviation—she improved some kids a lot. The other three teachers might not have had as dramatic a percentage increase, but the kids moved up more consistently. In three cases, the average score declined but was accompanied by a big increase in standard deviation, suggesting many of the kids in that category improved a bit while a few had huge drops. Teacher 2 and I had much tighter achievement numbers—I may have moved my students less far, but I moved a lot of them a little bit. None of this is to argue for one teacher’s superiority over another.
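The left-hand numbers are plain descriptive statistics. Another sketch, again assuming the hypothetical merged frame plus a teacher column:

```python
# Count, mean, and standard deviation of score changes per teacher;
# the "teacher" column is assumed to have been added to the merged frame
summary = alg.groupby("teacher")["change"].agg(["count", "mean", "std"])
print(summary.round(1))
```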

Of course, once I broke the data down by initial ability, group size became relevant, but I don’t have the overall numbers for each teacher and each category to calculate a confidence interval or a good sample size. I like 10. Eleven of the 18 categories hit that mark.

How many kids have scores for both years?

The 2011 scores for our school show that just over 400 students took the algebra test. My fall 2010 graph above shows 307 students with 2010 scores (in any subject) who began the year. Kick in another 25 for the teacher I didn’t include and we had about 330 kids with 2010 scores. My results show 230 kids with algebra scores for both years, and the missing teacher had 18, making 248. Another 19 kids had pre-algebra scores for the first year, although the state’s reports wouldn’t have cared about that. So 267 of the kids had scores for both years, or about 65% of the students tested.

Notice that I had the biggest fall-off in student count. I think five of my kids were expelled before the tests, and another four or so left for alternative campuses. I remember that two went back to Mexico; one moved to his grandparents’ in Iowa. Three of my intervention students were so disruptive during the tests that they were ejected, so their test results were not scored (the next year our school had a better method of dealing with disruptive students). Many of the rest finished the year and took the tests, but they left the district over the summer (not sure if they are included in the state reports, but I couldn’t get their data). All told, I went from 125 students to 95 by year-end.

What about the teachers?

Teacher 1: TFA, early-mid 20s, Asian, first year teacher. Had a first-class honors master’s degree in economics from one of the top ten universities in Europe. She did her two, then left teaching and is now doing analytics for a fashion firm in a city where “fashion firm” is a big deal. She was the best TFAer I’ve met, and an excellent new teacher.

Teacher 2: About 60. White. A 20-year teacher who started in English, took time off to be a mom, then came back and got a supplemental math credential. She is only qualified to teach algebra. She is the prototype for the Teacher A I described in my last post, an algebra specialist widely regarded as one of the finest teachers in the district, a regard I find completely warranted.

Teacher 3: Me. 48 at the time, white. Second career, second year teacher, English major originally but a 15-year techie. Went to one of the top-rated ed schools in the country.

Teacher 4: Asian, mid-late 30s. Math degree from a solid local university, teaches both advanced math and algebra. She became the department head the next year. The reason her classes are top-loaded with good students: the parents request her. Very much the favorite of administration and district officials.

And so, a Title I school, predominantly Hispanic population (my classes were 80% Hispanic), teachers that run the full gamut of desirability—second career techie from a good ed school, experienced pro math major, experienced pro without demonstrated higher math ability, top-tier recent college grad.

Where was the improvement? Case 1: Educational Policy Objectives

So what is “improvement”? Well, there’s a bunch of different answers. There’s “significant” improvement as researchers would define it. Can’t answer that with this data. But then, that’s not really the point. Our entire educational policy is premised on proficiency. So what improvement does it take to reach “proficiency”, or at least to change categories entirely?

Some context: In our state, fifty points is usually enough to move a student from the bottom of one category to the bottom of another. So a student who was at the tip top of Below Basic could increase 51 points and make it to the bottom of Proficient, which would be a bump of two categories. An increase of 50 points is, roughly, a 17% increase. Getting from the bottom of Far Below Basic to Below Basic requires an increase of 70%, but since the kids were all taking Algebra for the second time, the boost needed to get them from FBB to BB was a more reasonable 15-20%. To get from the top of the Far Below Basic category to Proficient—the goal that we are supposed to aim for—would require a 32% improvement. Improving from top of Basic to bottom of Advanced requires a 23% improvement.
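A back-of-envelope check of that 17% figure, with the starting score as an explicit assumption (I’m not listing the actual state cut scores here):

```python
# The 50-point/17% claim implies a starting score near 300; the base used
# here is an assumption, not a published cut score.
start = 300
print(f"{50 / start:.0%}")  # roughly 17%
```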

Given that context, only two of the teachers, in one category each, moved the needle enough to even think about those kinds of gains—and both categories had 6-8 students. Looking at categories with at least ten students, none of the teachers had average gains that would achieve our educational policy goals. In fact, from that perspective, the teachers are all doing roughly the same.

I looked up our state reports. Our total population scoring Proficient or Advanced increased 1%.

Then there’s this chart again:

[Chart: category changes, 2010 vs. 2011, repeated from above]

32 students moved from “not proficient” to “proficient/advanced”, and another 9 moved from “proficient” to “advanced”; I’ll throw them in. That’s 41 of the 230 students, or 18%, improved to the extent that, officially, 100% are supposed to achieve.

So educational policy-wise, not so good.

Where was the improvement? Case 2: Absolute Improvement

How about at the individual level? The chart helps with that, too:

[Chart: category changes, 2010 vs. 2011, repeated from above]

Only 18 students were “double gainers,” moving up two categories instead of one. Twelve of those students belonged to Teacher 4; four belonged to Teacher 1, while Teacher 2 and I had just one each (although I had two more that just missed by under 3 points). Teachers 1, 2, and 3 had one “double slider” each, who dropped two categories.

(I interviewed all the teachers about their double gainers; in all cases, the gains were unique to the students. The teachers all shrugged—who knew why this student improved? It wasn’t some brilliant aha moment unique to that teacher’s methods, nor was it due to the teacher’s inspiring belief and/or enthusiasm. Two of the three echoed my own opinion: the students’ cognitive abilities had just developed over the past year. Or maybe for some reason they’d blown off the test the year before. I taught two of the three “double sliders”—one was mine, and one I taught the following year in geometry—so I had the opportunity to ask them about their scores. Both said “Oh, yeah, I totally blew off the test.”)

So a quarter of the students had gains sufficient to move from the middle of one category to the middle of another. The largest improvement was 170 points, with about 10 students seeing >100 point improvement. The largest decline was 169 points, with 2 students seeing over 100 point decline. Another oddity: only one of these two students was a “double slider”. The other two “double sliders” had less than 100 point declines. My double slider had a 60 point decline; my largest point decline was 89 points, but only dropped one category.

However, the primary takeaway from our data is that 63% of the students forced to take algebra twice were, score-wise if not category-wise, a “push”. They dropped or gained slightly, may have moved from the bottom of one category to the middle of the same, or maybe from the top of one category to the bottom of another.

One might argue that we wasted a year of their lives.

State reports say our average algebra score from 2010 to 2011 nudged up half a point.

So it’s hard to find evidence that we made much of a difference to student achievement as a whole.

I know this is a long post, so I’ll remind the reader that all of the students in my study have already taken algebra once. Chew on that for a while, will you?

Where was the improvement? Case 3: Achievement Gap

I had found no answer to my conundrum in my above numbers, although I had found some comfort. Broken down by category, it’s clear I’m in the hunt. But the breakdown doesn’t explain how we had such a stupendous year.

But when I thought of comparing our state scores from year to year, I got a hint. The other way that schools can achieve educational policy objectives is by closing the achievement gap.

All of this data comes from the state reports for our school, and since I don’t want to discuss who I am on this blog, I can’t provide links. You’ll have to take my word for it—but then, this entire post is based on data that no one else has, so I guess the whole post involves taking my word for it.

2010-11 Change

  Overall: +0.5
  Whites: -7.2
  Hispanics: +4
  EcDis Hisp: -1
  ELL: +7

Wow. Whites dropped by seven points, Hispanics overall increased by 4, and non-native speakers (almost entirely Hispanic and economically disadvantaged), increased by 7 points.

So clearly, when our administrator was talking about our great year, she was talking about our cleverness in depressing white scores whilst boosting Hispanics.

Don’t read too much into the decline. For example, I personally booted 12 students, most of them white, out of my algebra classes because they’d scored advanced or proficient in algebra the previous year. Why on earth would they be taking the subject again? No other teacher did this, but I know that these students told their friends that they could get out of repeating Algebra I simply by demanding to be put in geometry. So it’s quite possible that much of the loss is due to fewer white advanced or proficient students taking algebra in the first place.

So who was teaching Hispanics and English Language Learners? While I can’t run reports anymore, I did have my original file of 2010 scores. So this data is incoming students with 2010 scores, not the final 2011 students. Also, in the file I had, the ED and ELL overlap was 100%, and I didn’t care about white or black EDs for this count. Disadvantaged non-ELL Asians in algebra is a tiny number (hell, even with ELL). So I kept ED out of it.

 

Hisp

ELL
t1

30

21
t2

32

38
t3

48

37
t4

39

12

Well, now. While Teacher 4 has a hefty number of Hispanics, very few of them are poor or ELLs. Teacher 2 seems to have Asian ELLs in addition to Hispanic ELLs. I have a whole bunch of Hispanics, most of them poor and ELL.

So I had the most mediocre numbers, but we had a great year for Hispanic and ELL scores, and I had the most Hispanic and ELL students. So maybe I was inadvertently responsible for depressing white scores by booting all those kids to geometry, but I had to have something to do with raising scores.

Or did I? Matthew DiCarlo is always warning against confusing year-to-year score comparisons, which are cross-sections of data at a point in time, with comparisons of student progress at two different points in time. In fact, he would probably say that I don’t have a conundrum, that it’s quite possible for me to have been a crappy teacher who had minimal impact on student achievement compared point to point, while the school’s “cross-section” data, which doesn’t compare students directly, could have some other reason for the dramatic changes.

Fair enough. In that case, we didn’t have a great year, right? It was just random happenstance.

This essay is long enough. So I’ll leave anyone interested to explain why this data shows that merit pay and value added scores are pointless. I’m not sure when I’ll get back to it, as I’ve got grades to do.


Teaching and Intellectual Property

So consider Teacher A and Teacher B.

Teacher A: Most days, the kids come in, teacher tells them to turn to a page in the book or gives a lecture, puts some notes on the board, works some examples, assigns problems to be done both in class and for homework.

Teacher B: Most days, the kids come in. Everything else depends. Some days it’s an activity leading to notes leading to problems, some days it’s class discussion leading through a topic, some days it’s a whole bunch of problems practicing skills coming out of the activity or class discussions, and some days it’s a little bit of all three. Every so often the book makes an appearance. Homework is simple and often distinct from the class sets.

Teacher A has carefully organized boardwork, copied from notes stored in a notebook or a lesson plan. The actual board is erased daily.

Teacher B has somewhat chaotic boardwork that is generated on the fly, and photographed at the end of class or whenever it is erased, which might be days later.

Teacher A generates tests using a software tool provided by the textbook publisher, or reuses tests created years ago, typed on a Selectric with hand-drawn diagrams.

Teacher B reuses tests, but tweaks them based on the classes for that year. Teacher B is an expert in Office or Google Docs or Open Office or whatever gets it done.

Teacher A has no idea how to use Office or Google Docs, or uses them infrequently, and wrinkles a confused brow at the notion of intellectual property.

Teacher B still shudders in horror at the near miss when a techie wiped a hard drive without realizing B didn’t have a network account, thus obliterating everything on the hard drive—which, thank all that’s holy, was nothing, because Teacher B stores an extensive, personally-developed curriculum library on Dropbox.

Of course, these practices are a spectrum that extend beyond Teachers A and B. I imagine somewhere in the world exist Teacher As using copied versions of an original mimeograph, and Dan Meyer and Fawn Nguyen are way out there in crazyville, totally unstructured gosh, math is something kids should DO not be TOLD about land, creating everything on the fly each day.

But here’s the point: Teacher B almost certainly puts in far more hours than Teacher A, and spends a lot more time thinking about each day’s activities and how to craft a lesson specific to each class’s needs. Teacher A teaches the subject, not the class.

Teacher A and B are paid by the same step and row scale. And that’s how it should be.

Most teacher contracts are very specific on hours: teachers shall be in the classroom from 0X:00 a.m. to 0Y:00 p.m. They shall sign up for Z hours of supervision duty. There are W hours committed to staff meetings and in-house professional development. Teachers have to be in class every single day unless blah blah blah.

Look up curriculum in a contract, on the other hand, and it’s very vague. Teachers shall go to professional development for multi-cultural curriculum. Or maybe teachers shall teach agreed-upon curriculum. Or sometimes new teachers shall meet with mentor teachers to consult on curriculum development.

Most contracts have a section on resolving disputes over “curriculum mandates”, when the districts require teachers teach one particular method, use one particular book, or follow one particular schedule.

Teacher evaluations are typically based on observations. Prior to the observations, teachers are often asked to submit lesson plans as evidence that they are considering the needs of all students: ELL, special ed, struggling, Hispanic/black. The administrator evaluates based on execution of the plan, as well as observed teacher qualities during the lesson: does the teacher constantly check for student understanding, are the students engaged, are the students behaving, and so on.

As everyone knows, reformers and politicians are anxious to change that evaluation process, because by golly, more teachers need firing. Firing more teachers is best accomplished by linking student outcomes to teachers, since teachers have less control over student outcomes than any other aspect of their performance.

So teachers are evaluated by planning, classroom performance and management and, possibly, student test scores.

Are they ever evaluated on the curriculum they develop? Is that part of the recent push? Compare google results for “teacher evaluation” “test scores” with results for “teacher evaluation” “curriculum development” and it’s pretty clear that evaluating teachers’ personally developed curriculum is not on the horizon.

Of course, any teacher could tell you that. Teachers are not evaluated on the content of their classroom curriculum. They are not asked to submit examples of their personally developed curriculum. They aren’t asked to build curriculum as part of their jobs.

To put it in legal terms as I understand it, curriculum is not what teachers are hired to do. From Wikipedia:

A work made for hire (sometimes abbreviated as work for hire or WFH) is a work created by an employee as part of his or her job, or a work created on behalf of a client where all parties agree in writing to the WFH designation. It is an exception to the general rule that the person who actually creates a work is the legally recognized author of that work. According to copyright law in the United States and certain other copyright jurisdictions, if a work is “made for hire”, the employer—not the employee—is considered the legal author. In some countries, this is known as corporate authorship. The incorporated entity serving as an employer may be a corporation or other legal entity, an organization, or an individual.[1]

Andrew Rotherham has written about Teachers Pay Teachers, as has the NY Times, and both articles mention the legal aspects of teachers selling curriculum. Since districts are paying teachers to develop curriculum, the thinking goes, shouldn’t they own the curriculum? Apparently, one NY court said the district owned the curriculum because it provided the facilities on which the teacher developed the plans, but there’s little case law on the topic.

So I wrote up my case of Teacher A and Teacher B to articulate what seems to me the obvious argument in favor of giving teachers ownership of their intellectual property. Both teachers are doing the job they are paid to do. Teacher B is additionally developing curriculum. Teacher B is not hired to create curriculum, therefore the worksheets, activities, and the rest are not “work made for hire”.

As any contract makes obvious, teachers are paid for their hours in school. They are not tasked with developing curriculum, they aren’t evaluated on their individually developed curriculum. They are given a set of hours and objectives. How they complete the objectives, within given constraints, is largely up to them. That’s why curriculum mandates so often require mediation, because teachers are used to making their own classroom decisions and object when it’s imposed from the outside. Curriculum is ours.

To quote Rotherham again: What we consider schools are often just loose confederations of independent contractors, each overseeing his or her own classroom.

Notice the name is Teachers Pay Teachers. It’s not the districts or the schools buying the activities. Perhaps some of the teachers are turning around and billing the district, but I suspect most of them think of these purchases as part of their professional responsibility to find curriculum that engages their students. Some teachers just use the books. Some create their own activities. Some work together with their departments, sharing out curriculum responsibilities cooperatively (if you surveyed teachers, a plurality would choose this as their desired method, although very few schools seem to do it consistently). Some turn to google. Others buy from other teachers. But it’s the teachers’ purview to make curriculum decisions.

The districts are entirely removed from this process. In all but a few cases, they aren’t giving teachers clearly delineated lesson plans and activity worksheets, daily schedules, tests—all perfectly aligned with their students’ actual abilities, not the pretense that we’re actually teaching Hamlet to kids who can read at a sophomore level, or second year algebra to kids who know the difference between a positive and negative slope. No, they provide books that teachers can choose to use or not, and in some cases benchmark dates for interim tests. On occasion, they will mandate professional development taught by middle school teachers who wanted out of the classroom. The teachers will show up and, usually, snicker politely. But when the door closes, the district is nowhere to be found, and it’s all on the teachers to decide on the daily lesson and teach what they determine is necessary.

So then, if a teacher is particularly good at developing lesson plans, sequences, or activities that other teachers spot and want to use, even pay for, then the district wants in on the money? Yeah, I think not.

I believe that even the issue of where the material is developed is irrelevant, although I can see a better case for that. Unless a teacher develops all material during a prep period, the material was developed off the clock. If a teacher stays after school to build a great handout or activity for the next day, that time is unpaid. The district and school get the immediate benefit from the lesson–which is again what they pay the teacher for.

Consider, too, that teachers often reuse lessons and activities they developed at other districts. The districts see the benefits from this reuse free of charge. They aren’t required to pay the previous districts for the use of its computers or teacher time spent developing that material. I imagine these districts demanding ownership rights of curriculum have no interest in hunting down the previous districts to reimburse them for the value they are now getting.

Teacher intellectual property is an odd concept to discuss in a world that shows little respect for teacher brains or creativity. But I believe that a close reading of any contracts and the ample evidence of Teacher As and Teacher Bs, all getting the same money despite profoundly different work product, would show that teachers are paid purely for the time spent teaching, not the materials that they use to teach with. Therefore, any materials they create to teach are not work made for hire. And if a district has inserted contractual text saying otherwise, then it should be challenged on this.

Apparently the NEA agrees with me, so I doubt any such text is going to be showing up much in the future:

Furthermore, education employees should own the copyright to materials that they create in the course of their employment. There should be an appropriate “teacher’s exception” to the “works made for hire” doctrine, pursuant to which works created by education employees in the course of their employment are owned by the employee. This exception should reflect the unique practices and traditions of academia.

All issues relating to copyright ownership of materials created by education employees should be resolved through collective bargaining or other process of bilateral decision-making between the employer and the affiliate.

The ownership rights of education employees who create copyrightable materials should not prevent education employees from making appropriate use of such materials in providing educational services to their students.

I am, clearly, a Teacher B, so this is something I feel pretty strongly about. Not that I’d ever sell my lessons—I’m way too much of the tech open source tradition for that. You want it, ask me. It’s yours for everything but selling under your name. To the extent I want control over my intellectual property, I want it to a) prevent any district from benefiting from it monetarily and b) maybe put it in a book some day, if a publisher is ambitious.

But the larger point, I think, is what this means both for Common Core and the curriculum purists like Core Knowledge. Education reformers often don’t understand the point Rotherham makes: teachers are independent operators, particularly at the high school level. Enforcement of a particular curriculum is very nearly impossible. I’ve been focusing on the way curriculum breakdown happens at the teacher level, but Larry Cuban has an excellent essay, The Multi-Layered Curriculum, that lays out the other ways in which curriculum goals break down.

So behind the issue of teachers’ intellectual property lies a much bigger issue: why do teachers have intellectual property? Why are they developing their own material? To many people—including a whole bunch of teachers—this is a problem. To others, including many Teacher As and all Teacher Bs, this is a feature. If you took away my ability to develop my own material, you would remove a lot of the joy I take in teaching. I’d still teach, I think, but many others of my ilk would not.

Think about this and before long it starts to become clear that education reformers constantly argue for two goals that are potentially in conflict: powerful standards that articulate a cohesive required curriculum and bright, creative, resourceful teachers. Because if the standards don’t have buy-in—and make no mistake, neither Common Core standards nor any curriculum like Core Knowledge have anything approaching buy-in—then bright, creative, resourceful teachers will develop their own curriculum and ignore anything they disagree with.

I am not arguing that all Teacher As are soulless drones and all Teacher Bs are mythical enchanting woodland sprites who make magic in their classrooms. Teacher As have intellectual property as well; it’s just harder to see. What I am saying is that the very notion of teacher intellectual property reveals the problems with any attempts to create broader standards or a common curriculum.

But on the basic point, I think things should be pretty clear: teachers are not paid to develop curriculum. Since curriculum isn’t work for hire, the worksheets, activities, lesson sequences, and any other resources they develop are theirs to do with as they wish.