Times hits raw nerve with data on teachers


The Los Angeles Times has created a firestorm – and prompted a call from the head of the teachers union for a boycott of the paper – by evaluating the performance of 6,000 third- through fifth-grade teachers, based on standardized test results. The Times plans to publish the effectiveness ratings of individual teachers later this month, after giving them until Thursday to comment on their results.

Not surprisingly, the story in Sunday’s edition has generated a rush of comments, with Secretary of Education Arne Duncan, writer Diane Ravitch and some nationally prominent superintendents also weighing in on the Times article.

Duncan told the Times he endorsed the public release of the data, because parents have a right to know if their children’s teachers are effective. “What’s there to hide?” he asked. California Education Secretary Bonnie Reiss said the state would encourage districts to develop and release similar information. But Ravitch, who opposes using standardized test scores as a basis for firing teachers, called the article “disgraceful,” and California Teachers Association President David Sanchez said publishing the database of teachers’ effectiveness would be “irresponsible.”

Readers’ reactions were split. Some called for an end to tenure; others said the database would further erode teacher morale; one reader called for the Times to publish reviews of its own reporters – so they could feel what it’s like to be under a microscope.

The research raised two distinct questions: Should teachers be measured by their students’ STAR scores in English language arts and math, and should those evaluations be published, with teachers’ names attached?

The former seems obvious: of course, if the limitations of the data are made clear and if the results are placed in the context of a teacher’s overall performance. The second question is tougher – and troubling – even if, as in the Times’ case, the methodology is sound. The public will inevitably inflate the importance of the ratings, even if the district and principals do not. Just as they push to get into schools with the highest API scores, ignoring other factors, parents will clamor for the teachers who moved students up the greatest fraction of a percentile. Many teachers will respond by fixating on STAR tests, and on preparation for them, even more than they do now. The numbers could become a straitjacket for their teaching, a distraction and a source of tension in an increasingly joyless profession. For many teachers, the surprise publication of their scores will be an embarrassment.

At the same time, results of test scores should be an integral component of a teacher’s internal performance review. This is especially true at the margins – for the 10 percent of teachers whose students consistently regress and for the 10 percent whose students consistently outperform. The former need to improve, or, failing that, to exit the profession, while the latter should be recognized.

Value-added analysis

Methodology is critical. The Times used a technique known as value-added analysis, which is favored by the Obama administration and many education reformers. It’s more rigorous than simply comparing raw test scores. As the Times explained, the value-added approach “rates teachers based on their students’ progress on standardized tests from year to year. Each student’s performance is compared with his or her own in past years, which largely controls for outside influences often blamed for academic failure: poverty, prior learning and other factors.”

A student who entered the year at the 40th percentile in English language arts or math would be expected to be at that level at year-end. If she falls to the 30th percentile or rises to the 50th, the difference is credited to the teacher’s impact, though other influences could be at work, too.
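The arithmetic behind that attribution can be sketched in a few lines. This is a deliberately naive illustration of the idea, not the Times’ actual model – Buddin’s analysis controls for additional factors and pools seven years of data – and the teacher names and numbers here are invented:

```python
from collections import defaultdict

# Hypothetical records: (teacher, student, prior-year percentile, current-year percentile).
records = [
    ("Teacher A", "s1", 40, 50),
    ("Teacher A", "s2", 60, 65),
    ("Teacher B", "s3", 40, 30),
    ("Teacher B", "s4", 55, 50),
]

def value_added(records):
    """Average each teacher's students' percentile gains.

    A student's expected year-end percentile is simply last year's
    percentile; the leftover gain or loss is attributed to the teacher.
    Averaging across many students (and years) damps out noise.
    """
    residuals = defaultdict(list)
    for teacher, _student, prior, current in records:
        residuals[teacher].append(current - prior)
    return {t: sum(r) / len(r) for t, r in residuals.items()}

print(value_added(records))  # {'Teacher A': 7.5, 'Teacher B': -7.5}
```

A real value-added model would regress current scores on prior scores and other controls rather than assume a one-to-one carryover, but the attribution logic is the same.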

The Times’ research was a huge undertaking. The newspaper hired a RAND Corporation researcher and economist, Richard Buddin, who did the type of data analysis that Superintendent Ramon Cortines acknowledged the district should be doing — and will. “I think it’s the next step. It has to be done.”

Value-added analysis has its critics, who warn against using it in a high-stakes context, as the sole basis for rewarding or firing. It won’t account for the impact of larger class sizes, the bad-luck draw of disruptive students, parental divorce, a stomachache on testing day. But the Times looked at seven years of student results – a length of time that can minimize the effect of data noise and yearly fluctuations. As the Times article fairly noted, “value-added analysis offers the closest thing available to an objective assessment of teachers.”

The Times’ macro findings were interesting and, to an extent, unexpected.

  • The most effective teachers – those who raised students’ scores up to 25 percentage points in math and 17 percentage points in English — were scattered throughout the district, not concentrated in affluent neighborhoods. And the worst weren’t massed in poor neighborhoods. (Measuring value added actually presents special challenges in wealthy schools, because students may enter second grade, when tests are first given, already highly proficient, with little room for added improvement.)
  • The teacher a student gets has a far greater impact than the school she attends. Some students in the study were assigned the least effective teachers multiple years in a row, which can leave children falling farther and farther behind their peers.

Credible advocates of value-added analysis don’t argue that test scores should be the sole or even the primary way to measure a teacher. But the STAR program does measure the standards that teachers should be teaching, and there will be no retreating from that state commitment, regardless of what happens at the federal level to the No Child Left Behind law. The issue shouldn’t be whether value-added analysis is done but how the data are used.

Leaders of United Teachers Los Angeles, always 20 steps behind and 20 decibels too loud, cling to the position that STAR test results shouldn’t be part of a teacher’s performance review. As long as they do, parents and school critics will seize on the results as evidence that teachers are making excuses and covering for their weakest colleagues. Teachers should instead welcome sophisticated analyses of student test results as tools for improvement and as proof of their accomplishments.

Parents consider the effectiveness ratings important; teachers should assure parents that they do, too – and prove it by incorporating them into performance reviews, with consequences and rewards.

Publishing the database may finally force the UTLA to confront an issue that it has reflexively resisted.


  1. “This is especially true at the margins – for the 10 percent of teachers whose students consistently regress and for the 10 percent whose students consistently outperform.” Not to be nit-picky but I think you meant ‘at the extremes’, not ‘at the margins’. Value-added actually does a pretty poor job of differentiating at the margins – it’s only teachers who are in the tails of the distribution that we can really have any confidence that we are accurately capturing effectiveness.

    Regardless of how one feels about value-added, as a researcher, I’ve been shocked at the public disclosure of teachers’ names. Most researchers have to sign their lives away in confidentiality agreements if they want to use student-level data with individual identifiers. How in the world did the Times get their hands on this data without such an agreement?

  2. Will data be collected, analyzed and published about the effectiveness of parents as it relates to student achievement? I believe that factors at home have a huge impact on student learning and achievement. When parents fail to feed, clothe, house, provide medical treatment, and support educators through homework and experiential learning opportunities, etc., will they be penalized with increased taxes to provide schools with the additional funds to intervene on their child’s behalf?

  3. Two questions: How did the Times and its researchers get their hands on supposedly confidential individual student identifiers? And when did the California standards become linked from grade to grade so that such an analysis would be valid? As a parent, I do not want my children’s scores made available to anyone except school authorities and me.

  4. I find this very interesting, esp. the macro results. Seven years is enough time to weed out some noise, but seven years representing the first seven years of a teacher’s career would likely produce different, or at least milder, results than seven years in the middle of another teacher’s career, even if both teachers are excellent in that seventh and final year of data collected.

    Could this data be used to compare teachers who started out after a traditional credential program vs. TFA with the same number of years teaching experience? Could this give us a place to start in comparing the two?

    Some teachers have suffered with reviews/evaluations that may be as much political with the school or district as practical in evaluating performance, and many teachers report frustration in getting feedback from administrators to improve their teaching, possibly because some administrators have no teaching experience from which to offer it.

    In any other workplace, when you receive feedback at an evaluation on what you should improve, it’s private, and you get a chance to improve before any performance issues are known to others in your workplace, unless they already know from direct contact. This is not coming from inside the organization; it’s coming from outside. This is the kind of thing that media with money can pay for, but it would be tough for the district to justify ponying up the money for, given draconian budget cuts. Shouldn’t teachers get a chance to WORK with the data before being called out on the carpet?

    I’m sure the data is really valuable and could start some very constructive dialogue within schools and within the district if they got a chance to work with it, but it still has flaws, some of which probably haven’t been considered yet – like measuring the teacher who tends to get students from homeless shelters, which means year after year his students are tested after being shuttled from school to school as their lives are continually uprooted, having been in his classroom for less than 90 days before the test. Because they’ve been in school somewhere that academic year, they count against him.

    But when it’s released in a month, teachers’ lives are going to be more stressful, even for the ones whose data will look better than their peers’. Parents and neighborhoods will demand that teachers whose data doesn’t look so good be fired, or at least demand to get their kid out of that teacher’s class TOMORROW – and really, considering all else going on in schools today, I can’t imagine being able to go to work every day like that. It takes the constant criticism of teachers that has been in the media since before I was born to a whole new level, the level where individual teachers become the targets of media campaigns, and I wouldn’t be surprised if a teacher’s car gets egged in the parking lot over it.

  5. As a parent who’s about to meet my kid’s third-grade teacher this week, I wonder what educators think about the LATimes story’s description of teaching technique. Twice, a good teacher was contrasted with a poor one, and reporters sat in on classes and described what they saw as differences in technique. What I took from it as a lay reader was that a teacher’s engagement with students — asking questions, following up on students’ answers — made the difference, as well as expectations for students. But just because that was the visible difference doesn’t mean it was the meaningful one.

  6. The most important factor for a student’s success is the teacher. Will anyone be drawn to the teaching profession given the intense criticism teachers are experiencing? We need the best and the brightest, but will they come? And can we pay them? Far easier to use your college degree in another profession.

  7. The LATimes series is indeed provocative, not only in terms of the value-added analysis but also in the prospect of revealing individual teacher identities. I’ve followed the ebbs and flows of value-added analyses since the early ’90s, and have always been a tad skeptical that they can hold up to the technical demands of both longitudinal data [grade-to-grade comparability of scores, as mentioned by others, as well as dealing with mobility of students, which may be a larger concern] and high-stakes accountability usage.

    The linking of individual student scores over time is indeed a privacy issue, which I’ll let the author and the LATimes discuss, noting that one way to do it is via access to ID numbers with names and/or other identifying information stripped from the data files for these kinds of analyses. Another option is to do value-added analyses on aggregate data only, with no individual student score records involved — less precise analyses, but fewer technical concerns with comparing scores from grade to grade and with mobility issues.

    The LATimes stories of course do not go into these technical concerns and rely on folks to accept on blind faith the proposition that the analyses are conducted with high technical rigor. As a longtime skeptic but professional friend of many who work on these kinds of policy analytic problems, I’d like to see things like the variability of estimates around individual teacher effectiveness ratings before accepting the analysis for any kind of high-stakes decisions or widespread usage. Doug McRae, Retired Test Publisher, Monterey, CA

  8. I feel that the anger of some at the LA Times is misguided. Jennifer Imazeki is upset that teacher names were not kept confidential. Just as a reminder, it is the confidentiality of students’ names that is anchored in law, not that of teachers. Teachers are public employees. The public, as their employer, should be aware of their salaries, as it is for other public employees, and should have visibility into gross measures of their effectiveness.

    Pamela asks if teachers shouldn’t be given a chance to work with the data before being called on the carpet. I think she is right. Except that teachers have had this data for their own school for years, and any teacher who did not compare his or her scores to those of peers in the same school is not a professional. And if they did compare their results and then proceeded to ignore what they saw, …

    But the biggest anger should be directed at the LAUSD administration and school board, not at the LA Times. Cortines and his people had all this data for years. They claim they did not evaluate it in exactly the same way the LA Times did, and promise to do so in the future. In my mind, they are simply lying, and heads should roll. And without golden parachutes. Perhaps they did not evaluate the data *exactly* the same way the LA Times did. But they certainly evaluated this data seven different ways and knew everything they now claim is new to them. They simply willfully chose to ignore it, hoping that nobody would put it in such stark terms to the broad public. Now that it has been, UTLA and the LAUSD administration work hand in hand criticizing the LA Times rather than submitting mass resignations admitting gross incompetence and mismanagement. The LA Times did California a great service.

  9. Let’s see if the LA Times releases the analysis. Given the direct criticisms they have leveled it only seems the reasonable thing to do if they want to be taken seriously. Given the Times has the data it would sure be interesting to know what can be learned about teacher improvement. How many and what classes of teachers are improving their teaching?

  10. Longitudinal student data has been available for years – at least it was when I served on a site council many years ago. We looked at individual students graphed on a chart (not identified in any way, to protect privacy) to measure the year-to-year progress of students and the effectiveness of programs. At the time, the data were not used to measure individual teachers in that setting, but it was certainly available to the principal and administration. It was my understanding that these measures were looked at during private teacher evaluations as a personnel matter. It was phenomenally helpful to look at the trajectory of students. It has always baffled me that NCLB did not use this to rank schools. Bottom line: if schools were using the tools available to them to evaluate, support and mentor their teachers effectively all along, the drive to release their names and hoist them on the petard of the LA Times would be moot.

  11. In an ideal world, the school district would have done the calculation, informed the teachers how their students were progressing, and helped them improve. Given that the district apparently did not do that, I think the Times has done a public service.

    This study is consistent with most other research I have seen: that there is a huge difference in the effect teachers have on student performance and that most of the conventional measures of teacher quality (education, credentials, etc.) have little relationship to student learning.

    I would suggest that its members would be much better served if the union concentrated on isolating the qualities shared by successful teachers and exploring ways that less successful teachers could get better results.

  12. Buddin states: “I would clarify that the LA Times did not get information to student-level identities.”

    If that is true, then how did they link specific students to specific teachers for the purposes of the value-added analysis?

  13. FERPA clearly protects individually identifiable data for students, and I am sure that the student-level data used in this analysis had the students’ identifier numbers, not names or other identifiable information such as Social Security numbers. I am pretty sure that teacher data are not covered by FERPA, and identifiable data, such as names, can be released. Getting this level of data for teachers really is the difference between trying to negotiate with a district or the state for data and making a public records request, where the district/state holds less leverage if you know the law.

    Certainly the results could be released without teachers’ names, with perhaps the number of “high performing” and “underperforming” teachers in each school at each grade, to at least signal to parents that they need to pay attention to whom their child is assigned and ask deeper questions about the effectiveness of their child’s teacher. This could also aid the teacher seeking to become more effective, and the principals and district administrators charged with providing a quality education and making sure that the teachers in the classroom are effective in helping children reach their potential and desired outcomes.

    It is one measure, and any evaluation should encompass multiple measures, but to say that this is not important ignores what we continue to learn about the effect of quality teachers on the lives of children, such as the analysis from the Tennessee STAR study showing that effective kindergarten teachers translate into greater employment, income, and life outcomes for the children who were in their classes.

  14. The LA Times was able to get it through the FOIA, but individual teachers have not had access to other teachers’ data, certainly not over a period of years, nor is the programming involved something a teacher could easily do, as talented as they are, given the daily demands of their profession. In fact, if a teacher changes jobs, they may not even be able to see their own data when it’s finally released the next year.

  15. My initial reaction to this story is this:

    I can’t help but think that if we had been having a more public discussion around collective bargaining proposals during prior negotiations in Los Angeles (as provided for in the Educational Employment Relations Act), we would have been much better prepared to be having this discussion – in fact, I’d suggest that we would now be much further down the road in determining what the characteristics and qualities of effective teaching and learning actually are.

  16. This article is a disgraceful collapse of journalistic standards and ethics, an escalation of open warfare against teachers, public schools, school communities and the children in those schools.

    To begin with, the value-added methodology on which the analysis is based is adored by opponents of public education but discredited by most researchers.

    And that methodology — applied by a moonlighting RAND researcher freelancing for the LA Times — has been discredited by RAND itself.

    Here’s some reasoned discussion of that point, from Washington Post education reporter Valerie Strauss’ blog:


    Here’s a tweet from Diane Ravitch, former U.S. education official and author of the bestselling book “The Death and Life of the Great American School System”: ‘LATimes should publish names of heart surgeons and patient mortality rates, plus lawyers win/lost scores. Why stop with teachers?’

    As to reporters — entirely untrained in education — sitting in on classrooms, passing judgment on teachers’ classroom manner and printing their names — well, their observations may be valid in this case. But are you comfortable with the press doing this with your profession, perhaps with you? Is the press qualified to judge your professional skills? I hope this is a rhetorical question. In return, are they willing to be subjected to the same treatment?

  17. While I think that the LA Times is going a little too far, given the likely instability of value-added measures over time, I do think there is a void to be filled in providing this type of information. In a blog post at the Quick and the Ed, I provide a quick summary of the research showing that, at the teacher level, these measures lack stability both across time and across tests. I also propose a middle ground for all of those thinking about writing their own public records request to simulate this for other districts. Read more at:

  18. John, you wrote, “But the STAR program does measure the standards that teachers should be teaching.” As a high school English teacher, I have to disagree. Our standards fall into four areas: reading, writing, listening, and speaking. The tests do a debatable job of assessing SOME of the reading standards, and an even-harder-to-defend job of assessing some awareness of some of the writing standards. You can’t bubble write, so you can’t prove that a student who knows punctuation will use it consistently, for example. From where I sit, the tests only purport to measure a fraction of the state standards I teach. And how do they do that? With a tiny number of questions. These tests are designed mainly to give a snapshot of a school, district, or state.

    My son’s 3rd grade report came last week. The report describes the standards for Literary Response and Analysis this way: “Students read and respond to a wide variety of significant works of children’s literature. They distinguish between the structural features of the text and the literary terms or elements (e.g., theme, plot, setting, characters).” And how many multiple-choice questions did they use to assess my son’s ability to do that? EIGHT. Eight questions to attempt to assess my son, or his teacher – to see what was covered in a year?

    On the math side of the report, they use verbs like “choose, use, describe, compare, show, conduct [experiments], predict.” IF you can assess these standards using multiple-choice tests, how many questions would you want? For the standards I just quoted, they rely on 21 questions. I am increasingly frustrated by the unquestioning confidence in the value of these tests.

  19. Two notes on the data and methods behind Richard Buddin’s report:

    The number of teachers in the sample who switch schools is not large enough to separately identify teacher effects and school effects. Therefore, each estimated teacher effect is actually composed of a teacher effect and a school effect. What this means is that it is possible to use the teacher effectiveness estimates to compare teachers within schools, but not across schools.

    Buddin does not report the standard errors of his teacher estimates, so we do not know how precise the estimates are. As Jennifer notes, value-added estimates provide at best only a very rough measure of teacher effectiveness.

  20. Louis Freedberg on California Watch:

    The publication of a controversial, and groundbreaking, article by the Los Angeles Times raises complex questions about whether to “out” teachers whose students perform poorly on reading and math tests.

    That is especially true when the “value-added” techniques used to identify them are themselves mostly untested and filled with hazards, even for statisticians who do this kind of thing for a living.

    The Times’ analysis holds the potential to fling open the door of any California classroom for public examination in a way that has never been attempted before – with completely uncertain outcomes.


  21. Just wanted to respond to a couple of other comments. It is not really accurate to say that value-added is “discredited” by most researchers – I don’t do value-added work myself but am familiar enough with the literature to feel safe saying that there is general agreement that a) there are steep data requirements to do such an analysis well, b) even when those data requirements are met, there are still limitations but we CAN learn something useful, and c) even ivory-tower academics would never suggest that value-added measures be the sole basis for teacher evaluations.

    I think it’s also important for people to know that LAUSD has actually been in the process of trying to develop a way to use value-added as part of teacher evaluation (as one of multiple measures), and they were going about it in a relatively thoughtful way – they asked solid researchers to help them create and evaluate the program, they were going to pilot a smaller program before rolling it out district-wide, and they were working hard to get teacher buy-in. It isn’t clear whether that process will now be more difficult because the union will dig in its heels, or easier because public outcry will demand a response from the district, but it does not seem that the LAT writers were aware of this effort.

    I, for one, would certainly prefer to see LAUSD take a slower, thoughtful approach to implementing this sort of evaluation system, and it would be a shame if it is derailed by careless journalism.

  22. I’ve been looking at my 7th grade CST math scores from last year’s students. At my school, we have had access to individual students’ scores for the last few years and are able to look at overall proficiency as well as student growth from year to year (assuming students are not new to our district). About half of my students moved up a band on the proficiency scale (for example, from below basic to basic), most of the rest stayed the same, and about 10% went down. I’m guessing from this that my value-added score would look pretty good.

    I also know our 6th grade math scores are lower than the 5th grade scores. Our 6th grade teachers are very aware of this and working hard to figure out why. The fact that for the past few years they have had a couple of awful teachers in their grade level hasn’t helped. Last year one didn’t get tenure, and the other was removed from the classroom and thankfully retired. That’s 2 out of 5 teachers in the 6th grade.

    My question is this – as a 7th grade teacher, wouldn’t my value-added score be a beneficiary of some of this bad teaching in the 6th grade? Some of these kids who come to me supposedly below grade level based on their 6th grade scores were proficient in 5th grade. I have no data to back this up, but my experience tells me it is easier to move this type of student back to proficiency than to make the same growth with a kid who has always struggled with math.

  23. I’m willing to bet that most of the teachers at the bottom of the Times’ data teach third grade. I’ve been told repeatedly by teachers, principals and district administrators that third grade test scores can be expected to fall off dramatically from second grade scores. So I feel especially bad for the 3rd grade teacher called out by name in the article, and I am surprised that no one mentioned it to the Times’ reporters, at least anecdotally.

  24. Re the issue about the third-grade scores — this could be a classic example of the “check it and lose it” attitude in journalism. If they admit that there are confounding issues and complexities, the Times reporters’ tidy and simple method of rating teachers is blown all to hell, and there goes their entire story and their entire project. I was totally unaware that my own colleagues had this attitude when I was working for daily newspapers, but I have seen it over and over in education issues since. Editorial boards are the worst (if anyone would like examples I can happily expound in detail).

  25. Actually, I’m not quick to assume the worst. I’ve been closely following the education policy debate and the media coverage for more than 10 years now, so that’s not really very quick. Over that time, I’ve gradually (emphasize GRADUALLY, not quickly) come to recognize the patterns of lapses in the standards I thought my former profession was supposed to uphold. One of them has been a rush to oversimplify and an aggressive refusal to address (or even recognize the existence of) mitigating circumstances. With editorial boards, the attitude has even extended to discouraging questions, quoting canned corporate PR lines to stifle questions and disparaging those who do ask questions.

    As to the L.A. Times’ teacher evaluation project, the entire concept violates journalistic standards. How can anyone agree that reporters and editors — who are supposed to be reporting the news as fairly and impartially as possible, not MAKING the news — should be pronouncing just how to evaluate an entire profession and then carrying out that evaluation themselves, including naming names and public shaming? (No, the moonlighting RAND researcher didn’t do that — he crunched the numbers for them.)

    If I was inaccurate in claiming that the Times reporters ignored one point, I retract, but that’s rather a small thing considering their huge, egregious lapse. I’m just stunned that my former colleagues consider this appropriate journalistic behavior. I thought we just worked in different departments, but it’s more like on different planets.

  26. John – how can you adjust for the year-to-year drop-off when looking at third-grade scores, given that students only start taking these tests in second grade? There is only one previous year to compare with. Does anyone know of a link to a detailed but readable explanation of how the value-added scores were calculated?

    • Stephanie: You probably have seen Richard Buddin’s detailed explanation of his methodology, which is indecipherable to the layman. Simpler explanations can be found here. Buddin has been responsive to questions. I will inquire about the third-grade scores issue.

      • I thank Richard Buddin for his thorough response:
        In early discussions, many teachers were concerned about a 3rd grade dip. They argued that students show a dip in proficiency between 2nd and 3rd grade. For example, students who were at the “proficient” level under the 2nd grade standard might fall to the “basic” level in 3rd grade (California measures proficiency on a 5-point scale ranging from “far below basic” to “advanced”). These proficiency rules are based on grade-level standards, so the decline is relative to the California standard and not necessarily an absolute decline in reading or math ability itself. Our study did not investigate the dip itself, but perhaps the “dip” is really a reflection of 3rd grade standards that are set too high relative to standards in other elementary grades.

        In our analysis, we looked at a student’s progress relative to their grade/year cohort and not relative to California grade-specific standards. The value-added measure tracks student-level progress relative to grade/year peers. In this context, we did not expect to see a dip, because we were comparing the progress that students made with some 3rd grade teachers relative to the progress made with other 3rd grade teachers.

        We also investigated whether third grade teachers were at some inherent disadvantage under our value-added measure, as many 3rd grade teachers had feared. The results in Table 5 of the technical report show that 4th and 5th grade teachers did not do better than 3rd grade teachers after controlling for other qualifications of those teachers. The only statistically significant difference is a very small negative effect for 5th grade relative to 3rd grade in ELA.
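        [Editor’s note: For readers wondering what “progress relative to grade/year peers” means concretely, here is a rough, hypothetical sketch in Python. It is NOT Buddin’s actual model, which is a regression with controls for student and classroom characteristics; it only illustrates the basic idea of comparing each student’s score gain with the average gain of the cohort, then averaging those residual gains by teacher.]

```python
# Hypothetical, simplified "value added" illustration: a teacher's score is
# the average of their students' gains measured AGAINST the cohort-wide
# average gain. (Real models use regression with many controls.)

def value_added(students):
    """students: list of dicts with 'teacher', 'prior', and 'current' scores."""
    gains = [s["current"] - s["prior"] for s in students]
    cohort_mean = sum(gains) / len(gains)          # average gain for the whole cohort
    by_teacher = {}
    for s, g in zip(students, gains):
        # residual gain: how much this student beat (or trailed) the cohort
        by_teacher.setdefault(s["teacher"], []).append(g - cohort_mean)
    return {t: sum(r) / len(r) for t, r in by_teacher.items()}

# Made-up example cohort of four students with two teachers
cohort = [
    {"teacher": "A", "prior": 300, "current": 320},
    {"teacher": "A", "prior": 310, "current": 325},
    {"teacher": "B", "prior": 305, "current": 310},
    {"teacher": "B", "prior": 295, "current": 305},
]
print(value_added(cohort))  # → {'A': 5.0, 'B': -5.0}
```

        Because every teacher is measured against the same cohort average, a grade-wide “dip” in absolute scores cancels out, which is why Buddin did not expect a 3rd grade disadvantage.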


© Thoughts on Public Education 2014