
Why I Argued With Arianna Huffington and Yo-Yo Ma Over Human Judgment and Inner Wisdom

The Huffington Post The Huffington Post 19/10/2015 Andrew McAfee
Grading tests. © TongSur via Getty Images

It might have been good theater, but I really disliked it when I found myself passionately and seriously disagreeing on stage with Yo-Yo Ma and Arianna Huffington.

Disliked it because Yo-Yo is probably the coolest guy in the world. I didn't know him before the Fenway Forum event (actually held in Faneuil Hall because of bad Boston weather) where he and I were both on a panel, moderated by rockstar academic Michael Sandel, called "What's the Right Thing To Do?" But in talking with him backstage before we went on, I learned that he is funny, silly, and wise all at the same time. His music is sublime and I play it when I'm trying to think deep thoughts, but after getting to know him a little bit I can honestly say I would rather spend the evening talking with him than listening to him play his cello. And Arianna is a mensch, the master of many domains, and a true force of nature. I've learned a lot from her and consider her a friend and mentor. She's also on my list of all-time best dinner companions.

So it was more than a little uncomfortable when I saw how much they were disagreeing with something I said during the forum. Their eyes lit up, they leaned forward in their chairs, and their voices became a lot more animated as they joined together to contradict me.

Sandel had just asked the students in the room if they would prefer to have their essays graded by a hypothetical algorithm that was as good as their professors at assessing the quality of a written argument. I told them (at about 1:29:30 in the video):

"I was a professor at Harvard for 10 years, and some semesters I'd have about 180 students and it was a final exam-based course -- a big part of the final grade came from the exam. Students up there: if you think that you're in that kind of course and your professors are spending an hour per exam grading them, I have hard news for you. If you think that they are grading the 100th exam with the same attention they paid to the first, I have really hard news for you. And if you think that if you gave them the exact same exam five years in a row they would give you the same grade on it, I have really hard news for you. So Michael, with your assumption -- with a grading app as good as a professor on any subject -- let me assure the students in this room: if you want to be evaluated fairly and objectively, you desperately want that app."

When I was done I looked over at Yo-Yo and saw that our blossoming friendship was in serious trouble. He clearly thought that my actions and my recommendations were unconscionable.

"I just have to say something, because I actually feel very strongly about this... I'll talk about something I know just a little bit about. Which is, when you have a kid, anybody, playing one note, that can be replicated by anybody else. But ask that same kid... to make the second note happen, the way you actually join the first note to the second depends on the physical mechanism, the neuromuscular structure of the person. And, what they want to hear inside. That path, from one note to the next, is going to be different for every single human being on this planet. The reason we have conversations with people is because you don't know where the conversation is going. And if you have an app -- I don't care how big the data, how great the algorithms are -- it's finite. The idea of the human spirit, who can actually get to something that's beyond the finite, that is I think that's part of every human being. And we want to look for that in every student. And we want to demand that every teacher pays attention to every single student in the classroom."

To explain why I seem to be so against human connection and judgment, I cited the famous study showing that the decisions of Israeli parole judges varied predictably with the time of day: they were much more lenient early in the day and right after food breaks (when blood sugar levels were high) than just before the breaks. The study found, in fact, that prisoners whose cases were heard at the wrong time of day had a close to 0% chance of parole.

I stressed that these were societally important choices made by highly trained and dedicated people. And the results were transparently, ludicrously bad. My point was that we needed to walk away from approaches like this that are so heavily dependent on human judgment, and find more objective ways to make decisions.

Earlier in the session Arianna had been on my side in discussions about Uber's business practices (we were both in favor of surge pricing), but I got no help from her here.

"I must say, I absolutely, completely agree with Yo-Yo. I think it's an incredibly important point, because that's what ultimately distinguished machines from human beings. There is something in human beings, however flawed, whether you want to call it a soul, whether you call it spirit, whether you want to call it inner wisdom, there is something a machine has never had, will never have. So for us to leave any important decision, whether it's the grading of a philosophy or poetry paper, or the paroling of a convict, or the choosing of a mate or any of these things to a machine is deeply troubling for me."

Yo-Yo and Arianna just weren't having it. As I tried to press my case, I kept coming back to the Israeli example and asking them if they were comfortable putting the wrong prisoners back on the street simply because of flawed human decision making. By the end of our back and forth, I'm sure I was coming across as a law-and-order, lock-'em-up wingnut.

So here's what I should have said (with thanks to the Huffington Post for giving me the do-over space that everyone ever involved in a debate craves):

"Yo-Yo, Arianna: from what we've been talking about I know that the phenomenon we're all passionate about is human flourishing -- is ensuring people have what they need to grow and to do great things with their lives. Yet I'm coming across, correctly, as an advocate for more machines and more algorithms in critically important areas -- in fact, especially in critically important areas. Let me try to explain why this is with another example:

In Florida's Broward County school district, the first step in getting kids identified as gifted used to be nomination by their parents or teachers. These are two groups that obviously know the kids well, that have lots of conversations with them, and that devote a huge amount of energy to their success. But both parents and teachers make these assessments subjectively, and as a result gifted classrooms tend to be disproportionately full of white boys. Because after all, the stereotype of a gifted kid is a nerdy white boy. Most students in Broward were minority, but 56% of the kids in gifted programs were white.

About a decade ago the district decided to walk away from the subjective method, and to instead try to be as systematic and objective as possible. They gave every kid in the district a non-verbal IQ test. The results of this one change -- and it was the only change made -- were striking. 80% more black kids and 130% more Hispanic kids were identified as gifted.

There's no shortage of other studies showing the same thing. In fact, there's so much research on the topic that it almost makes me weep with frustration. There's just no doubt any more: we're being too subjective about important assessments, and people are being left out, left behind, and hurt as a result.

I am not against mentorship or conversation or in-person teaching, coaching, or healing. I am absolutely for all of these things. What I am against is making bad decisions on behalf of others.

When we're making decisions for ourselves we should be able to do whatever we like. But when we're making decisions on behalf of others -- when we're assessing them or diagnosing them or hiring them or grading them -- we have an obligation, I believe a profound moral obligation, to make the best decision possible. Not the one that feels most "humanistic" or "soulful," but the one that gives the best results.

And there's not any debate anymore -- or at least there shouldn't be -- about how to make these decisions. We need to be as systematic, consistent, and evidence-driven about them as possible. And that means doing something uncomfortable but necessary: it means turning away from our current faith in and reliance on experts and their judgment, their gut, their experience, and their "inner wisdom," and turning toward the algorithms and the data.


We'll get the best results by combining what people do best with what machines do best. But right now that balance is skewed way too far over in the direction of relying on human judgment and intuition, and it's hurting us. Maybe by the time those gifted kids of color in Broward grow up, we'll have changed how we think about this. Because it's the right thing to do."
