
Philosophy Colloquium: Cameron Buckner

Virtual event

The third colloquium in the philosophy department's Kleiner Lecture Series supported by Philosophy and Artificial Intelligence features Dr. Cameron Buckner of the University of Houston, who will speak on "Can Deep Neural Networks Model--Or Even Transcend--the Human Faculty of Abstraction?"

Over the last five years, deep neural networks have accomplished feats that skeptics thought would remain beyond the reach of artificial intelligence for at least several more decades. The researchers who developed these networks argue that their success derives from their ability to construct increasingly abstract, hierarchically structured representations of the environment. Skeptics of deep learning, however, point to the bizarre ways that deep neural networks seem to fail--especially illustrated by their responses to "adversarial examples," where small modifications of images that are imperceptible or incoherent to humans can dramatically change the networks' decisions--to argue that they are not capable of meaningful abstraction at all.

In this talk, I draw on the work of empiricist philosophers like Locke and Hume to articulate four different methods of abstraction that deep neural networks can apply to their inputs to build general category representations. I then review recent empirical research which raises an intriguing possibility: that the apparently bizarre performance of deep neural networks on adversarial examples may actually illustrate that, when we increase their parameters beyond biologically plausible ranges, they can use those same methods of abstraction to discover real and useful properties that lie beyond human ken. This might allow these networks to blow past the frontier of human understanding in scientific domains characterized by extreme complexity--such as particle physics, protein folding, and neuroscience--but possibly on the condition that humans can never fully understand the artificial systems' discoveries.

I end by offering some guiding principles for exploring this inscrutable terrain, which contains both dangers and opportunities. Specifically, I argue that machine learning here is rediscovering classic problems with scientific reasoning from philosophy (the "riddles of induction"), and that we need to develop new methods to decide which inscrutable properties are suitable targets of scientific research and which are just the distinctive processing artifacts of deep learning.
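For readers unfamiliar with the term, the "adversarial examples" mentioned in the abstract are typically generated by methods such as the fast gradient sign method. The short PyTorch sketch below is purely illustrative and is not drawn from the talk; the tiny classifier, random input, and epsilon value are placeholder assumptions.

    import torch
    import torch.nn as nn

    def fgsm_perturb(model, x, y, epsilon=0.01):
        """Fast gradient sign method: add a small, sign-of-gradient perturbation."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # Step in the direction that most increases the loss, bounded by epsilon.
        return (x + epsilon * x.grad.sign()).detach()

    # Hypothetical stand-in classifier and a random 32x32 RGB "image".
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    x = torch.rand(1, 3, 32, 32)
    y = torch.tensor([3])              # an arbitrary label
    x_adv = fgsm_perturb(model, x, y)
    print((x_adv - x).abs().max())     # change per pixel is at most epsilon

The point such examples illustrate is that the perturbation is bounded by epsilon and so is usually imperceptible to a human viewer, yet it can flip the network's predicted label.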

Cameron Buckner is an Associate Professor in the Department of Philosophy at the University of Houston.
