In this second post about the statistics in ‘Visible Learning’, the author, British mathematician Ollieorange2, asks some uncomfortable questions about the self-correcting capacity of the education science community.
For me, two questions remain:
- If half of the statistics are wrong, how does that affect the recommendations to teachers based on those statistics?
- And how much of the other half is reliable?
Originally posted on ollieorange2.
In an earlier post we discovered that John Hattie had quietly admitted that half of the statistics in Visible Learning were incorrect. He uses two statistics in the book, the ‘Effect Size’ and the ‘CLE’, and all of the CLEs throughout the book are wrong.
Now, I didn’t really know why they were wrong. I thought maybe he was using a computer program to calculate them and it had been set up incorrectly. I didn’t know, until I received this comment from Per-Daniel Liljegren. He was giving a seminar on Visible Learning for some teachers in Sweden and didn’t understand some of what he’d found, so he wrote to Debra Masters, Director of Visible Learning Plus, asking for help.
“Now, when preparing the very first seminar, I was very puzzled over the CLEs in Hattie’s Visible Learning. It seems to me that most of the CLEs are simply the Effect Size, d, divided by the square root of 2.
Should not the CLE be some integral from -infinity to d divided by the square root of 2?”
And if you grab your copy of Visible Learning and check, he’s right! The CLEs are just the Effect Size divided by the square root of 2.
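To see what the discrepancy means in practice, here is a minimal sketch. The CLE is defined as a probability: the chance that a randomly chosen score from one group beats a randomly chosen score from the other, which for two normal distributions with equal variance is Φ(d/√2), where Φ is the standard normal CDF. The book's figures instead appear to be the raw quantity d/√2. The value d = 0.40 below is purely illustrative, not taken from the post.

```python
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal cumulative distribution function,
    # expressed via the error function from the maths library.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def cle(d):
    # Common Language Effect size: the probability that a randomly
    # chosen score from one group exceeds a randomly chosen score
    # from the other, assuming equal-variance normal distributions.
    return normal_cdf(d / sqrt(2.0))

def book_cle(d):
    # The value the book appears to report: d / sqrt(2).
    # Note this is not a probability at all once d exceeds sqrt(2).
    return d / sqrt(2.0)

d = 0.40  # an illustrative effect size
print(round(cle(d), 3))       # 0.611 — a sensible probability
print(round(book_cle(d), 3))  # 0.283 — just d / sqrt(2)
```

For d = 0.40 the correct CLE is about 0.61 (a slightly-better-than-even chance), while d/√2 gives 0.28, and for any effect size above √2 ≈ 1.41 the book's formula produces a "probability" greater than 1, which is a quick way to spot that something has gone wrong.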
He never received a reply to his letter.