Jun 09 2011

Why does TFA value quantity over quality?

Another good question.

So the first question, again, is: how do I know that TFA values quantity over quality?  First let me clarify what this means: that TFA is more concerned with growing the corps than with making sure its new teachers are effective.

I know this is true because TFA has published it on their own website. In their growth plan they list their organizational priorities, and the first one is not ensuring that CMs teach effectively the hundreds of thousands of kids who have been entrusted to them.  Instead, the first priority is ‘Grow to scale while increasing the diversity of the corps.’  Then for the second priority, thankfully, they have ‘Maximize the impact of corps members on student achievement.’

Now I have two problems with this priority list.  The first is that it is crazy for TFA not to make increasing the impact of CMs in the classroom number one.  They say that the teaching part creates TFA’s ‘short-term impact,’ which, in my mind, really minimizes how important it is that CMs be as effective as possible.

My other problem with their mixed-up priorities is that even if it is more important for them to grow than to succeed, why would they admit that?  I generally ask TFA to be more honest, but in this case their failure to airbrush the truth makes me think that they don’t even understand why someone might question this ordering of priorities.

Still, this didn’t surprise me much.  For years I’ve noticed that TFA does a much better job recruiting than it does training.

I’ve recently been complaining that TFA keeps the data about their failures secret.  Actually, in doing some research for my previous post, I found a document they produce that contains a lot of the data I wanted.  For instance, it reveals that the ‘quit rate’ I had been speculating about is 11%, which means one in nine CMs don’t complete the two years.  The same document also reports the percentages of CMs who fall into each of three performance categories: significant gains (1.5 years of growth), solid gains (1 year of growth), and limited gains (less than 1 year of growth).

Describing how they measure who falls into which category, they have on the website this puzzling explanation:  “Teach For America measures the percentage of corps members who can point to evidence that they have moved their students forward at least a year and a half’s worth of progress in a year’s time.”

So the CMs don’t actually have to ‘prove’ that they achieved these significant gains?  They just have to “point to evidence.”  How do they do that? [Note:  See comments from current CMs at the end of this post to get the answer to my mystery.]

So the document claims that in 2008 the percentage of first-year CMs getting significant gains was 30%.  In 2009 it was up to 39%, and they projected 44% for 2010.  For solid and significant gains combined, they reported 55% in 2008, 65% in 2009, and a projected 70% in 2010.

I have to tell you that I don’t buy the whole year and a half thing.

They didn’t have these metrics when I was a CM, but as one of the most successful teachers in the history of TFA, I’m sure that, in my prime, I would have scored high in whatever system they had.  And as proud as I am of myself for having been such an amazing teacher, I can’t say that my students progressed a year and a half in one year.  And I think that’s fine.  That really wasn’t my goal.  I wanted to develop confidence and problem solving abilities and also to show kids that math is fun and beautiful.  Doing all that might not get them a year and a half of gains, but to me it was much more important.

I have to question their metric: if 40% of 2009 CMs got these ‘significant gains,’ then something is wrong with the rating system.  Not to be a bummer, but do you realize how difficult it is to be a first-year teacher?  Aside from the basics, like teaching all your lessons for the first time while also trying to fit into the school community and learn simple things like how to fill out paperwork, it is completely physically and emotionally draining.  Even with my 13 years of teaching experience, if I had to transfer to a new grade level, make all new lessons, and go through the process of realizing that my lessons weren’t as fine-tuned as I thought they were, I’d be completely worn out.

Imagining a bell curve of effectiveness, I speculate that the number of rock stars should approximately equal the number of people who quit, or about 11%.

If TFA wants to delude itself into thinking that 40% of new CMs are making a year and a half of progress per year, then it makes sense that they don’t make improving training their top priority.  They’ve already accomplished it.

I’m not the only person, though, who thinks that TFA CMs are not as effective as these numbers suggest.  Wendy Kopp feels the same way.  She says so in chapter four of her new book, ‘A Chance To Make History’:

“our teachers are still not, on average, changing the trajectory of their students in a truly meaningful way.  With a lot of hard work, we are getting better, but we are not where we need to be:  The bell curve of effectiveness within our corps is still too wide.”

This, to me, seems to contradict the stats that 39% of first year 2009 CMs and 50% of second year 2009 CMs have gotten a year and a half in one year.

So, I guess we need to fix the training model?  Surprisingly, no.  A page later she explains:

“And it would be misguided to assume that there’s an as-of-yet undiscovered route for teacher preparation or retaining excellent teachers that will prove to be the silver bullet.  There is no evidence, for example, that longer preservice training, teacher residencies that place new teachers as apprentices for a year before they assume full-time teaching positions, or incentives for teachers to stay in the classroom longer produce significant impacts.”

In other words, nobody knows how to train teachers better than TFA already does, so let’s focus our attention on the bigger picture, which is the main point of the book: getting transformational leaders.  This is the new focus of TFA.  It’s a machine that takes in five thousand of the best and brightest, doesn’t really know how to improve training, and then turns them into a few dozen leaders, maybe even a President of the United States one day.

Let me finish with some constructive ideas about improving training:

1) The ‘Teaching as Leadership’ framework has a lot of flaws.  I think the biggest is how much it oversimplifies things and how it fails to properly prioritize the skills a new teacher needs to be successful.  For example, the very first tenet of Teaching As Leadership is that all effective teachers ‘Set Big Goals.’  I’d say this is horrible advice: most teachers who quit had also set big goals.  Setting big (and unrealistic) goals is not the self-fulfilling prophecy they claim it to be.  Read my longer critique of TAL to get this in more detail; I made the case much better there.

2) I’m not going to stop harping on TFA’s refusal to give CMs a proper student teaching experience.  Four CMs sharing a class that sometimes has only 10 kids in it?  That’s just negligent.  Yet they’ve been doing this since 1994, and I don’t get any sense that it will change.  Obviously it would be very expensive to do it right, but it is worth it.  A smaller but better-trained corps might produce more effective teachers, and even more future leaders.  But that conflicts with the top priority of growth.

37 Responses

  1. I’m sure there are many teachfor.us users who know exactly how Corps Members “point to evidence” of significant gains, since they are, for the most part, you know, current corps members. I can tell you that in 2006–2007, my last year as a CM–when they were just rolling out this method of measuring growth–it involved tracking the kids heavily, from pretest to post-test and all throughout the year, and then providing that evidence to TFA. Reading, for example, was assessed at the beginning and the end of the year by a nationally recognized reading assessment like the DRA or the San Diego Quick.

    Having read a few posts ago that you are “one of the best Math teachers in the country” and in this one that you are “one of the most successful teachers in the history of TFA” I am very curious as to how you come by those labels. Did you win some sort of contest?

    • Gary Rubinstein

      @Mrs B: One thing about this blog that you should understand is that some of my statements are researched facts and others are merely my opinions. I hope that the discerning reader can tell which is which. I suppose I could have written, “In my opinion, I am one of the most …” and that might have been clearer.
      I have won several ‘contests’ which I’ll mention in a bit, but as it is difficult to accurately measure and compare teachers, I don’t put that much stake in the contests. Others have won the same contests (like ‘teacher of the year’) who may not be that successful, so I don’t want to pretend that they are so important. But if they are important to you, I’ll go over them at the end.
      What makes me a great teacher, first of all, is that I’ve been working on improving for 20 years. I was pretty good my 4th year (and won the teacher of the year contest at my school that year), but I’m much wiser now. Aside from teaching for thirteen of the past 20 years, I’ve probably spent more time than anyone (in my opinion) thinking about teaching and what makes a teacher successful.
      You may or may not think of this as winning a contest, but I had to compete to get a job at Stuyvesant High School, which is known to be one of the top schools in the country. I don’t know how many applicants there were for this job, but I do know that I was hired within five minutes of my interview, so I must at least be able to make a good impression.
      At my school, within three years, I became one of the leaders in my department, which is full of great teachers. We work together; I ask them for help, but I also get asked for help a lot, which I don’t think people would do if I weren’t somewhat knowledgeable.
      I do have weaknesses, though, to go with my strengths as a teacher. My main one is that I can be too ‘teacher centered.’ I try to use group work effectively, but I haven’t mastered it; there are many teachers at my school who do it much better than I do, so I don’t think I’m perfect.
      You are welcome to watch my youtube videos at http://www.youtube.com/nymathteacher where I try, like Khan Academy, to explain topics related to the history of math. These are actually aimed at upper high school or even undergraduates, but it can give you a feel for my style. I know that ‘lecturing’ is only one component of great teaching, but it is a big component so this will give you a chance to look at my work.
      As far as ‘contests,’ I guess we could say that ‘teacher of the year’ at my school in 1995 was one. Then, getting my first book published in 1999 was somewhat like winning a contest. In 2006 I was awarded a four year fellowship from Math for America, and then in 2010 I was awarded another one. I did have to ‘compete’ against some of the best math teachers in New York City to get these. Then I had to know what I was talking about somewhat to get my second book published in 2010. I don’t know if this is exactly what you mean by ‘contest.’ If there ever is a national contest, especially for math teachers, please let me know. If I don’t win it, I believe I will at least be a contender.
      And as another measure of my abilities as a teacher, there is the blog that you are now reading. In a sense what makes this blog successful is the same thing that makes me a successful teacher. When I have a passion for something, it comes through and I have a way of making it interesting to other people. In the classroom, I might be excited about solving quadratic equations by completing the square. Here on my blog, I might be passionate about one of my TFA conspiracy theories. The fact that you, along with tens of thousands of others, come to read it each year, I think, ‘points to evidence’ that I am a successful teacher.
      Note, I didn’t say I was THE most successful teacher in the history of TFA, merely ‘one of the most.’ I agree it is hard to prove, but it’s what I think, so I have the right to say it, and you have the right to disagree with me.
      If you’re trying to make the point that I’m a hypocrite because I can make unproved claims about my success while TFA is not allowed to, I’d say that my claims of success don’t directly or indirectly hurt anyone. (Except maybe some feelings, though I didn’t claim that anyone else is not ‘one of the most successful teachers.’) But TFA’s unproved claims of success do hurt many people. They hurt the students who are taught by first-year CMs who go through a flawed training created by an organization that must believe its own PR. They hurt teachers around the country who get fired to make room for new TFA recruits, based on policies passed, in part, because of these claims.

  2. ZR

    Here are my thoughts on some of your points based on what I know about TFA:

    You say: “So the CMs don’t actually have to ‘prove’ that they achieved these significant gains? They just have to ‘point to evidence.’”

    I think: This wording was probably chosen to reflect the sentiment that the information teachers currently can access is hard to call “proof” since different placements and states have tests of different quality and reliability levels. For some teachers, it probably IS closer to proof, whereas for others they probably are using the best information they can find as a proxy measurement for the actual full range of growth. I do know for sure that TFA is constantly working internally and with external organizations to determine the best assessments and other measures that will get everyone closer to real proof, including true “value add” measures that show growth in comparison to similar teachers in and outside TFA.

    You say: “I wanted to develop confidence and problem solving abilities and also to show kids that math is fun and beautiful. Doing all that might not get them a year and a half of gains, but to me it was much more important.”

    I think: What use is confidence and believing that math is beautiful without the ability to actually solve math problems? I would not want my children (speaking of my biological children and the children I taught) to be confident without also having the academic abilities, which is what the growth measures get at.

    You say: “‘our teachers are still not, on average, changing the trajectory of their students in a truly meaningful way. With a lot of hard work, we are getting better, but we are not where we need to be: The bell curve of effectiveness within our corps is still too wide.’

    This, to me, seems to contradict the stats that 39% of first year 2009 CMs and 50% of second year 2009 CMs have gotten a year and a half in one year.”

    I think: Actually, I think she’s trying to get at the previous quote of yours I responded to about confidence, etc. When she’s talking about “changing the trajectory… in a truly meaningful way” I think she means that TFA teachers need to be BOTH achieving significant academic growth with students AND building their confidence, love of learning, knowledge about opportunities for further education, etc. So no, they don’t contradict, in my opinion. The stats have to do with the measure around academics, the statement has to do with a wider vision that TFA is also trying to figure out how to quantify and measure success towards. And before any indignation about measuring things like confidence comes up, I think TFA knows that any metrics will only be proxies and won’t show the whole picture. Doesn’t mean they can’t start to try to see who’s doing a good job having the full impact.

  3. WHY DOES NO ONE ELSE EVER QUESTION TFA’S METHOD OF EVALUATING EFFECTIVENESS?! Thank you for bringing it up. This drives me nuts.

    It’s easier to measure yearly growth in English, so maybe all of their data comes from there. But TFA is often used for “hard to fill” math and science spots, and I’m a math teacher, so I’d really like for them to pay attention to that, too. Unfortunately, I was informed by TFA that “We don’t really measure yearly growth in math, just in ELA.” Does that mean those stats about “significant gains” are only English teachers? Or am I missing information? (I’m hoping yes to the latter, but I need someone to tell me what it is.)

    The TFA person I’m quoting went on to say, “For math, it’s better to use percentages and compare them to what’s typical.” This is where my “pointing to evidence” instead of actually proving growth becomes important. The only data TFA actually has on my kids’ growth comes directly from me. They don’t use standardized test scores or anything objective; they just take my tracker at the end of the year. I could inflate numbers or delete items my kids struggled with. I could give tons of partial credit for little things. I could even have the world’s most honest intentions but still write really easy tests that aren’t truly at grade level. I COULD MAKE UP ALL MY DATA.

    Then the PDs somehow take this and “decide” who falls into which category of significant gains, solid gains, etc. There’s pressure on them to have high percentages making significant gains, too.

    I don’t change my data (but some of that is just out of spite for this intense pressure to hit 80%), and hopefully my PD doesn’t inflate it either. But even if all the data is always perfectly accurate, doesn’t the subjectivity make this a ridiculous system? I would never let my kids write their own tests, take them alone, and self-report their grades… and if I did do that, I would NEVER use the data to prove that I’m a good teacher. So how does TFA get away with doing effectively the same thing?

    • ZR

      I think it has to do with the availability of math assessments that allow for growth measures. In regions that use NWEA, my understanding is that there can be growth goals for math. Because of this and what Alohagirl says below, TFA is moving towards a different measurement system (as opposed to the sig/solid “bucket system”) that DOES use growth. But that requires rigorous, reliable assessments with data available on a much more specific level than what most states/regions/schools have. So it’s a work in progress.

      • Gary Rubinstein

        But the real question is whether the norm referenced cross correlated z-scores have been normalized or not. If a double-blind twice removed control group was tested independently, then the scores could become re-normalized and then un-normalized and then re-re-normalized to get a new z-score which could then be correlated with the old normalized data.

        • ZR

          Sorry, I’m not a statistics person and I have no idea what you’re asking/ suggesting here to be able to respond.

          • Gary Rubinstein

            I was just joking around — just statistical mumbo-jumbo since I didn’t understand most of the terms you used in your comment.

        • ZR

          Ah, well the point was that there aren’t many (any?) strong math assessments that show growth. NWEA is one assessment that can be used to measure growth in math.

    • Gary Rubinstein

      Am I understanding this correctly? The “40% of CMs get significant gains” stat is based on a small percentage of CMs who self-report and, in some cases, self-produce the pre- and post-tests? I think this might be the biggest fraud in TFA I’ve ever heard of in 20 years of TFA frauds. Any other CMs care to comment?

      • ZR

        No. This is another good question to ask someone more in the know, but from my understanding:

        a. most CMs do report data
        b. the assessment(s) used in that reporting depend on the placement and region
        c. it is part of regional staff’s jobs to determine assessments that are acceptable for this, which includes closely evaluating teacher-made tests (and teacher-made tests are usually only recorded for these purposes if nothing rigorous/ aligned is otherwise available)
        d. nationally, TFA is working to find/ provide more non teacher-created assessments that are known to be rigorous and to provide good data. Some of that work is internal, some of it involves partnering with organizations who are exploring similar topics.

        • ZR

          To clarify (because I can anticipate one point you might jump on) – there are multiple levels of staff members who check decisions about which assessments to use. They are looking for rigorous, reliable assessments.

          • All CMs are required to report data. Whether it’s complete or accurate or not, I imagine they get most people to turn something in.

            In Phoenix, the assessments used are COMPLETELY up to each corps member. I just turned in my tracker for the year, which consisted pretty much entirely of things I wrote myself and no one in TFA ever ever ever saw. I don’t know anything about regional staff “closely evaluating teacher-made tests”.

    • Gary Rubinstein

      Thanks mathinaz for filling me in on this info. It puts those numbers into a lot better perspective. Any stat which shows that 1st year 2009 CMs are as good as 2nd year 2008 CMs is completely invalid. Feel free to get your legion of fans to chime in. I’d appreciate it.

    • Wess

      Just FYI, the data used to calculate effectiveness for me this year was my TAKS data, compared to the gap between a “baseline” of how my school did last year and a “pacesetter” of how non-free-and-reduced-meals students averaged across Texas.

  4. Ha sorry, that was way longer of a rant than I intended.

  5. Alohagirl

    I have to agree with mathinaz here on the subjectivity of the “gains” – and I’m an ELA teacher. I gave my kids the DRA at the beginning of the year (well, November, so really kind of one-third into the year) and the end of the year, but that was my choice. I wasn’t required by TFA to use the DRA (altho they supplied the kits when I wanted them).

    The thing is, I personally administered the DRA both times, so I could be subjective in my results if I wanted to be (I’m not, but from what I’ve seen, many of the over-achieving TFAers might find this tempting with all the pressure we are under to “make gains”). And the DRA was my choice. For the tracker, we were told to use “whatever” we thought would chart student progress. Some teachers used their own tests, which, as was pointed out above, could be very easily manipulated – and even if they weren’t, what about the underlying quality of those tests? How do we know they were measuring anything?

    Also, this leads to apples to oranges to pears comparisons. If TFA teachers all over my region (and the nation?) are using vastly different measurement tools, how can we put that data together in any way that could be reported as a single percentage representing “growth”?

    • ZR

      That last point is one of the arguments behind national standards and better national assessments.

      • Alohagirl

        aha, and then here’s another barrier! the problem with national standardized tests is that for kids who aren’t on grade level (like my kids, who are ELL, or SPED kids), they can’t show growth. I have kids who are on a K–4th grade reading level in 7th & 8th grade (and I don’t think that’s a situation limited to SPED and ELL, either). So even if a kid starts at a K reading level in the beginning of the year and makes it to a 2nd grade level at the end of the year, results on the standardized tests are still dismal (and heartbreaking). And they don’t reveal growth.

        Let me say, I think the DRA is a great tool for avoiding that problem, and probably about as good as you’re going to get for measuring individual growth in this subject. I just wanted to point out that not every ELA TFA teacher is using it, so it’s hard to compare. Also, we use it with our HS ELLs, even tho it only goes to an 8th grade reading level (yeah, it’s that bad). So how do you test ELA growth at the high school level? Most reading tests I’ve seen are geared toward elementary (I may be wrong on this, perhaps I simply was not exposed to any).

        It’s not an easy issue to solve. But with the knowledge I have of how tracking is managed (in response to ZR’s comments above about regional staff reviewing assessments, that has NEVER happened to the CMs at my school) I wouldn’t trust the numbers that TFA puts out about growth. I worked in research science before finding my way into education via TFA, and as far as usable data, it’s a joke. But of course, it’s not meant to be usable data – it’s for PR, which is a whole different animal.

  6. I’m in agreement with mathinaz and Alohagirl. I’m an ELA teacher and I just find it wild that CMs can put together tests of any rigor and then make definitive statements about student mastery or growth based on them. Yes, I strive to quantify student progress, but I don’t ever believe that I can draw solid conclusions from, say, a student “mastering” 4 out of 5 multiple choice questions for a standard on one test.

    So, this year, I chose to give the NYC Regents comprehensive English exam as my summative assessment instead of one that I had made on my own. The idea being: give something that is recognized as “legitimate” and which also offers a score conversion chart to see where students stacked up. Just to be clear, I believe in using data to drive instruction, but I’ve become increasingly frustrated with the way TFA talks about mastery and growth numbers.

    Another major issue is that there are many things worth measuring that are incredibly hard to quantify, such as changes in mindset and attitude, both of which can be even more powerful than, say, 1.5 years of reading growth, especially if those changes lead students to work even harder to become smarter. I tried my best to find a way to quantify these changes. I use a Likert-scale literacy survey given at the beginning and end of each semester to see the changes (http://www.slideshare.net/ablogcoveringdceducation/literacy-survey). My mastery and growth “numbers” this year do not put me in the “significant gains” bucket (I readily admit this), but I have plenty of anecdotal evidence showing that my students’ mindsets have changed and that students have learned (this student rap is a good example of how my messages have gotten through: http://abcde.teachforus.org/2011/05/23/a-student-raps-about-his-teacher/). From TFA’s perspective, this does not count as much as the numbers.

    • Alohagirl

      Hey thanks for the link to the literacy survey – I am going to modify and use this!

    • Ms. H

      Your second link isn’t working. It takes you to a page that says there is nothing found.

      • This post has so many comments. I wish there were a way to receive notification with each new one. Just remove the “)” from the end of the second link and it’ll work out fine.

  7. MD

    i definitely agree with the subjectivity of gains. for us, it’s based on the state test (how many kids made basic), but this is based on a pre- and post-test that are exactly the same and given multiple times throughout the year. it’s more a measure of effective memorization than of teaching. additionally, the assessments used to test for 80% mastery are flawed and oftentimes way below grade level. it would be extremely easy to fabricate these results.

  8. debryc

    I’m a state tested grade level and subject area. For me, significant gains is determined as follows:

    1. Take the average scaled score on the state test by non-disadvantaged students
    2. Subtract the average scaled score of the students at my school from the previous year (so if I’m teaching fifth grade, this would be the score of last year’s fifth graders, or this year’s sixth graders)
    3. This difference in scores is the quantifiable achievement gap before I became a teacher at the school.
    4. Next, take the average scaled score of my students
    5. Subtract the average scaled score of the students at my school from the previous year
    6. This difference is the gains I’ve made for my class compared to last year’s class.
    7. Compare the difference in Step 6 with Step 3. If my gains close 20% or more of the achievement gap (Step 3), then I’ve made significant gains.

    This procedure is consistent for all tested grades and subjects across the Houston corps.
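
    To make the arithmetic concrete, here is a minimal sketch in Python of the procedure described above. The function name, the example scaled scores, and the hard-coded 20% threshold are hypothetical, for illustration only; this is not TFA’s actual code.

        # Hypothetical sketch of the Houston-style "significant gains" check.
        def made_significant_gains(pacesetter_avg, school_prior_avg, my_class_avg):
            gap_before = pacesetter_avg - school_prior_avg  # steps 1-3: gap before I arrived
            my_gain = my_class_avg - school_prior_avg       # steps 4-6: my class vs. last year's class
            # step 7: did my class close 20% or more of that gap?
            return gap_before > 0 and my_gain >= 0.20 * gap_before

        # Made-up scaled scores: gap_before = 2200 - 2000 = 200, my_gain = 2050 - 2000 = 50,
        # and 50 >= 0.20 * 200, so this class would count as making significant gains.
        print(made_significant_gains(2200, 2000, 2050))  # True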

    • parus

      My huge problem with this method is that it doesn’t tell you how much progress YOUR STUDENTS actually made, just how they compare to last year’s students.

      • parus

        To elaborate: I’m not trying to impugn you in any way, just commenting on this method of evaluation…but if you’re at a charter school, this would be so easy to game. Since it’s not comparing the students’ current scores to their own previous scores, but comparing them to another cohort, all the school would have to do to create “significant gains” would be to “counsel out” some of the lower-scoring students from the current cohort. And even in a normal public school where kids (theoretically) can’t get the boot, the data still wouldn’t speak directly to the current teacher – you might get one group of kids who, say, had a really terrible teacher a previous year, and therefore came in far behind, or a group who had the opposite and came in far ahead and therefore did fine on the state exam even if you did a completely crummy job all year.

        Even assuming the state exam is statistically valid and reliable (which is a stretch) and that students’ results on it are a good way to measure a teacher’s performance, I can’t see how this particular calculation gets at teacher performance at all.

        • debryc

          I agree that this isn’t student level data. I’ve looked at my own class’s growth via benchmark tests (released state tests) but TFA doesn’t have this data. I think the challenge is, TFA doesn’t know what the “pacesetter” scores are for students at the beginning of the year because these aren’t reported out to the state. Ideally (if we’re using state tests), the steps would be:

          1. Take the average scaled score on a released state test by non-disadvantaged students at the beginning of the year.
          2. Subtract the average scaled score of the students at my school on the same released state test at the beginning of the year.
          3. This difference in scores is the quantifiable achievement gap before I became a teacher at the school.
          4. Next, take the average scaled score on the end of the year state test by non-disadvantaged students.
          5. Subtract the average scaled score on the end of the year state test by my students.
          6. This difference is the quantifiable achievement gap that exists at the end of the year.
          7. Compare the difference in Step 6 with Step 3. If my gains close 20% or more of the achievement gap (Step 3), then I’ve made significant gains.
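
          Under the same caveat (hypothetical names and made-up numbers, Python), the proposed fix differs only in comparing the gap at the start of the year to the gap at the end, so growth is measured against the same cohort rather than against last year’s students:

              # Sketch of the proposed fix: compute the gap twice, for the same cohort.
              def fraction_of_gap_closed(pacesetter_pre, my_class_pre,
                                         pacesetter_post, my_class_post):
                  gap_start = pacesetter_pre - my_class_pre    # steps 1-3
                  gap_end = pacesetter_post - my_class_post    # steps 4-6
                  return (gap_start - gap_end) / gap_start     # step 7 compares this to 0.20

              # Made-up numbers: the gap shrinks from 200 to 150, so 25% of it was closed,
              # which would clear the 20% bar for significant gains.
              print(fraction_of_gap_closed(2100, 1900, 2200, 2050))  # 0.25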

  9. TFA’s metrics for student achievement growth are bad to the point that I think it’s intentional. When I was a CM, these were entirely self-reported. I don’t think TFA Corps Members are known for their humility and modesty.

    I understand they are now using TFA-created assessments, which is how Corps Members teaching second grade, for instance, report two years of gains. (I am using an actual local example.)

    I have problems with this. If the children have shown two years of gains, then this should be documented in their state tests. In particular, a significant portion of these students should be scoring at an Advanced level. But they aren’t; the majority of them are scoring at Basic. So if those kids had two years of growth in that second grade year, they had to have started with below-Kindergarten-level skills.

    This is a simplification, obviously, but the problem remains. The kids aren’t making two years of documented growth. The attitude that they are suggests that the rest of the teaching those children receive is low-quality, and it leads to bad feelings at the school site.

    • Ms G

      I just finished my 2 years and I think it’s still common to just use whatever you can for assessment – make it up, etc.

      • I think this is a huge problem when the CMs get out of the classroom, too. I spent one year working as a literacy coach. When you’re observing teachers, it’s easy to see how a lesson could be better structured, to spot a simple management solution, and so on. Hence you begin to see yourself as a remarkably outstanding educator. It’s too easy to forget how hard the work is when you’re not doing it.

        These inflated results exacerbate the problem. Once you’re out of the classroom a couple of years, resting on seventeen years of growth in nine months or whatever, it’s not hard to see bad teaching everywhere and attribute it to lazy, low-quality educators. Eventually, you’d have Rhee-level hubris with as little to show for it.

  10. H

    When I was a CM, pre and post assessments were self-created, self-analyzed, and self-reported. It was not difficult to show 1.5 or more years of growth. I don’t know how TFA analyzes and reports everyone’s self-reported results based on self-created assessments. I believe that some CMs or regions now use a TFA-created assessment? For those that don’t, I just don’t understand using self-created, unvalidated assessments to measure growth. Aside from that, there are no standard testing procedures for self-created assessments, which is also a concern.

  11. Cal

    I work in a district with a lot of TFAers. Two of them taught algebra with me this year. None of the comments below are intended to be derogatory, just descriptive.

    One of them was in her second year. She had, by her own constant descriptions, a terrible first year. Her second year was better, but (again by her own admission) not great. She then resigned from consideration for a third year because she didn’t want to teach anymore. But here’s the hilarious part–she’s interviewing for TFA jobs! That’s what she wants to do. She’s totally into TFA, loves it, even though she had a miserable time teaching. And since she’s under serious consideration for a number of TFA jobs, this is apparently not unusual.

    So something I would consider completely bizarre–a TFAer who hated teaching but whose next career goal is TFA staffing–is pretty much par for the course in TFA. I also wonder to what degree TFAers enter the program because they see themselves as long-term TFA employees–but not teachers.

    The other TFAer made it through her first year pretty well, and in general, I felt we had much more in common as teachers. She isn’t (again in my view) as much of a TFA true believer (Koolaid drinker, less charitably) as the first. She has no intention of staying in TFA, although she likes teaching–she misses her field of academic study.

    In one conversation, they were both talking about TFA’s expectation that 80% of their kids show mastery, and how they made it, but “just barely”.

    “What are you talking about? You get 80% mastery on your tests?”

    “Most of them, yes,” they replied.

    Now, I teach the same demographic they do, and there’s absolutely no way their kids could realistically hit 80% mastery on their tests. And both of them failed more kids than I did (particularly Hispanics), so they weren’t giving all their kids As and Bs. We did a coursealike halfway through the year, and my kids outscored both of theirs by a small amount; similarly, we all had the same results on benchmarks–and, given that I had a class of algebra intervention kids, my results should have been worse.

    So other metrics made it clear that they weren’t hitting 80%. But they continually talked about that number and how happy they were to be making it. Neither was dishonest; they were clearly giving their kids some sort of assessment that was returning 80% success. But that high level of mastery wasn’t showing up on any other metric.

  12. K

    The 1.5 years thing has always struck me as odd. It kind of presupposes that there is this magical “grade level ability” in all subject areas that can be tracked. In reading, sort of. I’ve taught 9 years now, and am in grad school studying literacy, and so I’m pretty familiar with a lot of tests of reading ability. They are very useful to track student growth, to figure out what to be teaching different kids/groups of kids, etc. But realistically, can we REALLY say that at age 8 and 2 months, the “right” way for a kid to read is such and such? And that by 8 years, 3 months, that kid is going to inevitably have grown in exactly this amount? Let me be clear – I think it’s great that the tests we use can show small degrees of change, because kids are motivated by degrees of change. But while the change is real, the number attached is just a label used to describe the change.

    It gets even more complicated outside of straight reading ability (which in and of itself is complicated by the context of the reading passages, which invariably involve some content that either is or isn’t familiar to kids before they read) – writing and math growth are way harder to quantify. And how about science and history? How do you claim that a history student grew 1.5 years? If all the material taught that year is new to them, and they learned it all, how do you “get ahead” in history? And why do you need to? I’m all for teaching history fully and thoughtfully, believe me. But to say “Oh, Johnny, you’re only on an 8th grade level in history” doesn’t make any sense. It seems self-explanatory that history doesn’t progress by grade level, it progresses as you study different topics, which could arise in a variety of grade levels (I’m not saying that the study of history doesn’t get more complicated as you get more advanced, just that it’s not a “grade level” thing).

  13. Michelle

    I think everyone would acknowledge that this is a flawed system, but at least TFA is trying to look at student gains and was doing so long before it became part of the national conversation about teacher effectiveness.

    So, yes, it all has to be taken with a grain of salt, but it’s a start. And it is work that Schools of Ed and other teacher training programs don’t even want to try.

    Gary, given the limitations of assessment systems, what concrete ideas do you have for collecting student growth data from corps members across various regions? You are never shy about telling TFA how to get better – so how would you do this data-collection better?

    • Gary Rubinstein

      I think all you really need to look at is the quit rate. For every person who quits, there are at least one or two more who probably aren’t doing anyone any good. If the quit rate is not decreasing, then the effectiveness of the CMs isn’t either. Get that quit rate down to about 5%, rather than 11%, and that’s all the proof I’d need of effectiveness. Using ‘Value-Added’ models is a very dangerous thing to do. In the wrong hands it causes a lot of hard-working teachers to lose their jobs and a lot of schools to be shut down.

  14. Ms G

    “Quantity over quality” is a real, real problem with Teach for America. As a teacher just finishing my two years with TFA and now looking for another teaching job, I think expansion is completely inappropriate right now, given the huge cuts happening across the country. Worse yet is the fact that Teach For America will knowingly support institutions doing terrible damage, rather than pulling out, because it is so attached to its expansion plan.

About this Blog

By a somewhat frustrated 1991 alum

Region: Houston
Grade: High School
Subject: Math
