We are not experts on energy, defense, or the environment. Most of us, at least, I think.
However, what we do know is healthcare. We are quick to recognize misrepresentations in the press, especially on hospital-related subjects. Yet because our erudition is lacking on the subjects above, we cannot judge those matters with expert eyes, even though we know the same distortions occur there. That passivity can leave us in a submissive state, and we often succumb to motivated reasoning to reach conclusions on non-HM matters.
I do not know education well (add another to my list), but you would have to be living in a dumpster not to know of the teachers' strike in Chicago. I wish to avoid snap judgments on the claims of either side, as I mentioned earlier, and thus read today's news with an unbiased view.
I reviewed the blog excerpt below this morning. It pertains to value-added models (VAMs) for teacher evaluations, essentially the equivalent of VBP for docs or HVBP for hospitals. We know those flaws. Nevertheless, substitute physician for teacher and patient for student, and you are an instant expert. Without your health background you might reach premature conclusions; others do not have that luxury. You will see where I am going with this:
Using VAMs for individual teacher evaluation is based on the belief that measured achievement gains for a specific teacher’s students reflect that teacher’s “effectiveness.” This attribution, however, assumes that student learning is measured well by a given test, is influenced by the teacher alone, and is independent from the growth of classmates and other aspects of the classroom context. None of these assumptions is well supported by current evidence.
Most importantly, research reveals that gains in student achievement are influenced by much more than any individual teacher. Other factors include:
- School factors such as class sizes, curriculum materials, instructional time, availability of specialists and tutors, and resources for learning (books, computers, science labs, and more);
- Home and community supports or challenges;
- Individual student needs and abilities, health, and attendance;
- Peer culture and achievement;
- Prior teachers and schooling, as well as other current teachers;
- Differential summer learning loss, which especially affects low-income children; and
- The specific tests used, which emphasize some kinds of learning and not others and which rarely measure achievement that is well above or below grade level.
However, value-added models don’t actually measure most of these factors. VAMs rely on statistical controls for past achievement to parse out the small portion of student gains that is due to other factors, of which the teacher is only one. As a consequence, researchers have documented a number of problems with VAM models as accurate measures of teachers’ effectiveness.
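The mechanics described in the excerpt can be shown with a toy sketch: regress current scores on prior scores, then call the average residual of each teacher's students that teacher's "value added." This is a deliberately minimal illustration, not any district's actual model, and every name and number below is fabricated. The point it makes is the excerpt's point: whatever the regression does not control for (class size, peer effects, summer loss) lands in the residual and gets attributed to the teacher.

```python
# Toy value-added sketch. Fabricated data; a real VAM has many more
# controls, but the attribution logic is the same.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x (single predictor)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b  # intercept a, slope b

def value_added(students):
    """students: list of (teacher, prior_score, current_score).
    Returns {teacher: mean residual} -- the 'value-added' estimate."""
    a, b = fit_line([s[1] for s in students], [s[2] for s in students])
    resid = {}
    for teacher, prior, current in students:
        resid.setdefault(teacher, []).append(current - (a + b * prior))
    return {t: sum(r) / len(r) for t, r in resid.items()}

# Fabricated roster: identical prior scores, different measured gains.
roster = [
    ("A", 60, 70), ("A", 70, 78), ("A", 80, 88),
    ("B", 60, 64), ("B", 70, 72), ("B", 80, 82),
]
print(value_added(roster))  # teacher A scores above the line, B below
```

Nothing in this sketch can tell you whether teacher B taught worse or simply drew the class with larger sizes, fewer tutors, or more summer learning loss; the residual absorbs all of it.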
I see readmissions, rankings, and the problems arising from patient variability and unadjusted socioeconomic factors. It is almost a match.
Thus, the lessons are twofold: One, other folks are undergoing scrutiny, perhaps unfairly (statistically, that is), and two, next time you dive into foreign waters on a subject you know little about, get an alternate take. There might be a soft underbelly lurking. Would most of my family believe the merits of a teacher evaluation system, "'cause it's got to be good?" Yup.
This example is easy, but next time it won’t be—and that goes double in an election year.