Deep Learning: The Promise and the Peril
"Deep learning" neural networks offer us great power - and pose unique risks. Can we solve for this?
A feature documentary (in production, 2017)
If a bank's computer turned you down for a loan but not one of the humans you spoke to could explain why, would that bother you?
If a self-driving car made a lethal mistake, but the engineers who designed it couldn't figure out what caused the accident and how to stop that mistake from happening again, how confident would you feel about buying one? Or driving next to one?
If recruiting software at your firm began systematically screening out minorities -- or women in a given age bracket -- without being "coded" to do so, who would you blame? And how would you stop it?
Neural network-based "deep learning" is a recent triumph of human ingenuity and intelligence, and the feats it is already achieving in a wide variety of fields -- from beating a Go grandmaster to accurately identifying photographs to detecting subtle financial fraud -- are breathtaking. But what fascinates and worries some observers (including me) about the technology is that it makes a computer system's "thinking" independent of specific instructions from its human creators. In a qualitatively significant shift from the "if this, then that" computing that most of us are familiar (and more or less comfortable) with, deep learning systems figure things out for themselves -- and just as we can't simply look into another human's brain to perceive why they made a decision that shocked us, we can't open the "black box" of such systems to understand what made them act the way they did.
The technology that makes these systems powerful and (within bounds) intelligent is the same technology that makes them inscrutable. In this feature documentary, I will examine these issues, asking scientists, technologists, and other thinkers why these systems work the way they do, where the dilemmas and dangers lie, and how, perhaps, we can fix them.