Aug 07 2011

What The ‘F’?


Anyone following this blog for the past few months knows that, to me, the most disturbing thing about the current ed reform movement is the heartless shutting down of neighborhood schools, based mainly on test scores.

Despite Arne Duncan’s claims that he’s seen too many 90/90/90 schools (90% poverty, 90% graduation rate, 90% achievement) to believe that it takes anything more than harder-working teachers to close the achievement gap in high poverty schools, he has yet to produce even ONE non-selective school that has such statistics.

Diane Ravitch called him and other politicians out on this in The New York Times a few months ago, and you don’t hear the reformers hailing miracle schools as much as they used to. This makes me happy since less dishonesty means that ‘failing’ schools don’t have to fight for their lives while trying to live up to a mythical ideal.

But the reformers saw this coming, so I’ve noticed a new party line recently. Rather than focus on absolutes (the percent of proficient students, regardless of poverty, in the old no-excuses, poverty-is-not-destiny, your-future-is-not-determined-by-your-zip-code spirit), they will now focus on ‘gains’: how much students progressed in a year, relative to their starting point.

One example of this is the teacher evaluation system that Michelle Rhee implemented in D.C. when she was chancellor there. About 28 minutes into this video she describes how 50% of the teacher evaluations were not based on absolute metrics, like getting 90% proficient, but on the ‘gains’ of the students. This is what’s now known as value-added: how much did the students progress?

Though this certainly sounds more fair than asking teachers to hit impossible proficiency percentages, there are still a lot of problems with these inaccurate tests.  And even if the tests were accurate, how can a teacher be blamed (held accountable) for kids who, for instance, missed much of the school year for poverty-related issues?  When this IMPACT program was implemented, they used it to fire over 200 teachers this year — all from high-poverty schools.  You could argue that this makes sense since only the worst teachers are unable to get jobs at the better schools.  But I’d say, having taught at many different schools, that the teachers at the high-poverty schools are, on average, better than the ones in the low-poverty schools.  They are just working with a more challenging group, so it is tougher to get those gains.

As New York City, where I live and teach, works to shut down more and more schools, I just discovered a truly bizarre use of this kind of ‘value-added’ process which factors highly into the annual school report card grade which, in turn, factors highly into the decision to shut down schools.

You won’t believe this.

Starting in 2010, New York City schools got their school report card grade based on the following considerations: 15% school environment, 25% student achievement (percent proficient), and a whopping 60% on ‘student progress’.  Up until the 2009-2010 school year, this ‘growth’ was based on student proficiency from one year to the next.  So if all the students had a 2.3 on math in 3rd grade and then they all had a 2.3 on math in 4th grade, they all grew by a year in a year’s time.

But for the 2009-2010 school year, according to their website, they changed it to this:

Student Progress (60% of overall score): measures how student proficiency has changed in the past year. Progress indicators track the yearly gain or loss in ELA and mathematics proficiency of the same students as they move from one grade to the next at the school. A student’s growth percentile indicates the percentage of students, starting at the same test score, whom the student’s growth exceeded. These measures focus on the capacities students develop as a result of attending the school, not the capacities they bring with them on the first day. The metric is calculated for all students and for students in each school’s lowest one-third, as determined by the previous school year’s ELA and Math proficiency ratings. Each of these four metrics counts for 15% of the total score.

So what this means is that they now measure the growth of the entire school like this.  For each student they take the score from the previous year, let’s say that student A got a 2.3 on the 3rd grade math assessment, and they take that student’s score on the 4th grade math assessment, let’s say a 2.2, and then they figure out what percent of students in the city who also got a 2.3 in 3rd grade scored lower than a 2.2 in 4th grade.  Let’s say that 60% of all the kids who got a 2.3 in 3rd grade got less than a 2.2 in 4th grade.  So for that kid, the score would be 60%.

They then do this with every kid in the school and take the median of all those scores.  So the entire school’s score is based on the one kid at the median and how he/she did in comparison to all the other kids in the district who had the same starting score.  They do this for math and ELA, and then they do it again for just the bottom third of the students in the school, to get 4 scores that make up the majority of the school report card grade.
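The calculation described above can be sketched in a few lines of code. This is just an illustration of the mechanics as I’ve described them, not the DOE’s actual formula; the function names and the tiny made-up data set are mine.

```python
# Sketch of the growth-percentile metric described above.
# All names and scores here are hypothetical, for illustration only.

from statistics import median

def growth_percentile(prior_score, current_score, citywide):
    """Percent of citywide students with the same prior-year score
    whose current-year score was LOWER than this student's."""
    peers = [cur for prior, cur in citywide if prior == prior_score]
    below = sum(1 for cur in peers if cur < current_score)
    return 100 * below / len(peers)

# Hypothetical citywide results: (3rd grade score, 4th grade score)
citywide = [(2.3, 2.0), (2.3, 2.1), (2.3, 2.2), (2.3, 2.4), (2.3, 2.5)]

# Student A started at 2.3 and got a 2.2; two of the five peers
# with the same starting score scored lower, so A's percentile is 40.
print(growth_percentile(2.3, 2.2, citywide))  # 40.0

# The school's score on this metric is the MEDIAN of its
# students' individual growth percentiles.
school_students = [(2.3, 2.2), (2.3, 2.4), (2.3, 2.0)]
school_score = median(growth_percentile(p, c, citywide)
                      for p, c in school_students)
print(school_score)  # median of 40, 60, 0 -> 40.0
```

Note how much rides on that single median student: the other kids’ percentiles could move up or down quite a bit without changing the school’s score at all.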

Now, the first thing to consider is what is a good score for this?  50% must be good since it means that the kids (or at least the kid) did better than half the kids who had the same starting point.  It seems like 40% wouldn’t be too bad either.

Well, I downloaded the complete 2009-2010 school report card spreadsheet to see how the five middle and elementary schools that received ‘F’s fared in this huge category.  (Good reading strategy:  make some predictions about what I might say next …)

School | Math % | ELA % | Bottom 1/3 Math % | Bottom 1/3 ELA %
Academy of Collaborative Education | 50 | 46 | 63.5 | 72
Cornerstone Academy for Social Action | 56 | 50.5 | 48 | 56
Fredrick Douglass Academy IV | 43 | 51.5 | 64 | 67
P.S. 332 | 45.5 | 64 | 51 | 65
Community Roots Charter School | 62 | 44.5 | 77 | 46.5

Now, I’m just a school teacher so I don’t know nuthin’ about no statistics, but these scores seem pretty good.  Anything even near 50% seems commendable, and many of these are way above 50%.  And this comprises 60% of their school report card grade, and these are the only five schools to get ‘F’s.
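There’s a reason 50% is the natural benchmark here: a growth percentile compares each student only against peers with the same starting score, so by construction the percentiles are spread roughly evenly from 0 to 100 citywide, and the typical school’s median lands near 50. A quick simulation (entirely made-up data, just to illustrate the arithmetic) shows this:

```python
# Why scores near 50 are unremarkable by construction: simulate a
# cohort of students with the same prior score and random current
# scores, compute each one's growth percentile, and check the median.
# Purely illustrative simulated data.

import random

random.seed(1)

# 2,000 simulated students who all had the same prior-year score,
# with noisy current-year scores.
current_scores = [random.gauss(2.3, 0.3) for _ in range(2_000)]

def percentile(score, peers):
    """Percent of peers scoring strictly below this score."""
    return 100 * sum(1 for s in peers if s < score) / len(peers)

percentiles = sorted(percentile(s, current_scores) for s in current_scores)
citywide_median = percentiles[len(percentiles) // 2]
print(round(citywide_median))  # 50 -- the middle student, by definition
```

So a school whose median student beat 50%, 60%, or 77% of same-starting-score peers citywide did fine on the very metric that counts for 60% of the grade, which is exactly why an ‘F’ for these schools looks so strange.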

Seems strange to me.

11 Responses

  1. Anne

    John Ewing, former director of the American Mathematical Society and now director of awesome Math for America, explains many of the problems with value-added measures:

  2. Edison Schools was a master of the “gains” claims 10+ years ago. I co-ran a volunteer research/info project on Edison in 2001. My colleague was really good at fine-tooth-combing the numbers. The Edison school here in SF was touting its (disaggregated) African-American students’ gains, and she discovered that they were only reporting test scores for 73% of their African-American students.

    They were doing this, and the local press was parroting it, when their school was actually ranked dead last in our district. I’m not a fan of rating schools by test scores, but they were the ones who decreed that was the standard, and then they gamed it with fraud.

  3. Nyceducator


    Quick clarification: 90-90-90 actually refers to 90% poverty, minority and meeting standards.

    • Gary Rubinstein

      I agree that this was the original definition, but since the poverty/minority thing was so closely related, it evolved to have different meanings. Duncan recently used it as 95-95-90 where it meant poverty/grad rate/going to college rate. The main reason that I like to include graduation rate when I discuss it is the fact that some charters get achievement up through attrition. It’s hard to keep the kids and get those standardized test scores up too.

  4. B

    That about sums it up, right Gary?

    • Gary Rubinstein

      Thanks. I hadn’t seen that clip.

  5. “A student’s growth percentile indicates the percentage of students, starting at the same test score, whom the student’s growth exceeded.”

    Pardon me, but won’t half of all students score in the bottom 50%? ISN’T THAT WHAT MEDIAN MEANS?? Is the goal to evaluate or rank the schools?

    When did education become a zero sum game?

    • Gary Rubinstein

      This one’s for evaluating the schools.

      • To reiterate Scott’s point – what’s the use of a measure that tells you relative gain rather than absolute? The question should be is a school helping the kids make adequate progress, not are they better than other similar schools.

    • LOL! You are absolutely right about the median. Most people don’t know that and certainly don’t take the time to research anything beyond what they read in papers.

About this Blog

By a somewhat frustrated 1991 alum
