Sean Carroll, a physicist and professor at Caltech, will join SXSW this year as a panelist discussing artificial intelligence and our place in the universe. The following is an edited transcript of our conversation on that topic and what he hopes to inspire as a panelist at SXSW Interactive.
As a physicist, could you tell me about the perspective you’re going to bring to your panel: What AI Reveals About Our Place in the Universe?
As a physicist, I care about philosophy and the wider way in which things fit together. I do have thoughts about what it means to be intelligent, and whether or not things that aren’t actually human could eventually become intelligent in the same way that human beings are. Namely, I think there’s no obstacle, in principle, to artificial intelligence being very similar to human intelligence—indistinguishable from it, even. I don’t think there’s anything special about the human mind that can’t be duplicated or imitated in a computer. At the same time, as a scientist, I am very impressed that we human beings arose biologically as a result of billions of years of evolution. The brain is not a blank slate; it’s a machine whose parts fit together in a specific way, just like our bodies. Since we invented computers, we as human beings tend to under-emphasize the fact that how we think depends not only on our brains, but on the fact that our brains are inside bodies. Our bodies give us incentives: desire, fear, food, sex. We have all sorts of instincts and motivations that have accrued over those billions of years, and we don’t have to put those same motivations into a computer. We could; we could try. I have long thought that in getting these computer innovations off the ground, we should give them the fear of death—that would make them sharpen their game. But we don’t have to! So that implies to me that even if we could potentially build computers to mimic human intelligence, it’s not obvious that we should, or that that would be the best way to construct artificial intelligence. We could construct intelligence that’s entirely different, but nevertheless as good as or better than human intelligence.
As we approach the creation of AI, I often wonder if our planet is not just a biological experiment created by some other entity in the universe. What are your thoughts on our existence?
The simulation argument is that we could be someone else’s experiment. I don’t quite buy it—I think that in some sense, to be honest, the universe that we see is too big and too good to be a computer simulation. But it could be, because we have no way of disproving it. My question would be: do we have affirmative reason to actually believe it, or is it just a fun way to think about things?
I’ll be interested to see if you cover that topic at the panel! In terms of your fellow panelists, have you collaborated with them before?
No, this is all pretty new to me. I’ve never met them, and I’ve never done anything quite like this before. I’ve done all sorts of weird things, but never something quite so focused on AI. I’m growing more interested in the subject, so I thought it was a fun opportunity to learn more and see what people have to say.
I recently watched your Stephen Colbert interview—not as focused on AI, but still very fun! In terms of AI, would you say you’re optimistic or wary about the future?
Yes, we had fun. Absolutely both! I’m very optimistic in terms of what AI could potentially do—in many ways, more optimistic than the AI experts, because if you spend all of your time working on AI, down to the nitty-gritty, there’s no question that ever since the 1960s the reality of AI has been disappointing compared to the hype. It’s a much harder problem than we thought it was going to be. So if you’re an expert banging your head against these real-world problems, progress can seem very slow, and you can be very knowledgeable and aware of the difficulties in making progress. But that doesn’t mean that over the timescale of ten or twenty or one hundred years there won’t be incredible breakthroughs that could change everything.
So I give enormous respect to the people who have gotten their hands dirty trying to make this field better and better, but precisely because they’re so aware of the difficulties we currently face, they overestimate the ultimate difficulty—just as physicists years ago probably overestimated the difficulties of getting to the moon.
Right, we don’t know what the problems will be.
Exactly. It’s hard to predict the future, especially when it’s not a matter of an absolute law of physics getting in your way—it’s just that the problem is hard and we don’t know what to do. That’s why I’m also worried. I do think that there are dangers associated with AI that we haven’t thought through carefully yet, and it’s remarkably difficult to raise that issue. It’s funny: when you mention that there might be dangers—famous thinkers have mentioned this, Elon Musk, Stephen Hawking, and others—you get so much pushback! They say, oh, you’re watching too many sci-fi movies, or AI can’t even turn on a light bulb without a fifty percent failure rate, it’ll never do anything bad. My response is: look, we don’t know what the capabilities of these new technologies are, and we know that we’ve built this highly interconnected, technologically dependent world. If we lost electrical power worldwide for a week, how many millions of people would die? Here in LA, we worry about earthquakes. People worry about buildings falling down, but that’s not the worry! The worry is that if there’s a real earthquake on the San Andreas Fault, all of the water lines and electricity lines and phone lines are cut. LA would be cut off from electricity, and that would be the real bad news. So you can’t tell me that it’s not conceivable that an artificial intelligence, a virus, or malevolent software could wreak havoc on our systems somehow. It’s at least worth taking those dangers seriously.
Right. I often think about AI in terms of how it’s going to fit into society—namely, the number of jobs that AI would take over, which types of jobs it would eliminate first, and therefore which people it would leave jobless. I think that would be a crucial problem that we would feel right away.
In the sufficiently long term, we actually get to turn that problem into a benefit. Namely, much of the work that gets done by human beings, big and small, will no longer be done by human beings; but the productivity and the wealth creation will still be there, so we can decouple earning money from doing work. We can free people to do what they want to do, not slave away at a job just to earn a living. That’s not next year, or maybe not even in my lifetime. As for the worries about job loss: historically, the fear that technology destroys jobs overall has never actually come true—the total number of jobs has not really gone up or down because of technology; the jobs just change what they are. The worry makes sense if what you’re concerned about is a particular person, right now, who will lose their job. That makes life very difficult if you’re a 60-year-old who only knows how to do one job and that job has disappeared—that’s a real problem. But the fact that driving trucks won’t be a job anymore is not, in itself, the problem; we’ll find other things for people to do.
There’s a pretty big gap between the people who understand our universe—how small and insignificant we are—and those who don’t. In terms of where our planet is headed, do you think bridging that gap, helping people understand where we’re headed, should be a concern of ours?
Yeah, as a scientist who cares a lot about communicating science clearly to as wide an audience as possible, I think that’s true in all areas of difficult scientific and technological knowledge. In fact, the same thing is true for knowledge of art or music or philosophy. We have this paradigm in the modern world where you go to college until you’re 22, and then you don’t. One of the great things about technology is that it has pulled the rug out from under that paradigm—you can sit down and take a college course whenever you want! For free, online, and with real professors! But not many people do. We haven’t quite caught on to this miraculous new world that we live in. I would love to be part of a world where talking carefully and passionately about difficult ideas is just what we do all the time. There’s no reason why science, art, and technology can’t be talked about as widely as movies and TV.
What do you hope the takeaway will be from your panel?
I think on a panel like this, the real hope for a takeaway would be some sort of provocation. Not a piece of information that says here’s how things will be in 20 years, but rather: here’s a set of scenarios for how things might be—what are you going to do about it? What can we do to make certain things come true, to stop others from coming true, to prepare for them, to think about ways to do even better? I hope to inspire people to think about a multitude of possibilities, rather than lay down a prediction.
Attend the panel: What AI Reveals About Our Place in the Universe
Friday, March 9, 2018
11:00 a.m.–12:00 p.m.
Featured image: Twitter