Will A.I. Ruin Us?
Professor and machine-learning expert Matt Johnson
takes on Skynet, spying and where the technology is going next
Professor Matt Johnson, who has been with Cal State East Bay since 1999, began studying political science at Xavier University at the age of 15. Though his parents might have preferred him to become a lawyer or a doctor, the teenage prodigy was fortunately distracted by video games and the seminal era of computer programming — “punch cards and Pong,” as he says.
After leaving Xavier with degrees in both political science and computer science, Johnson obtained a master’s in computer engineering at Michigan State — where his first foray into artificial intelligence occurred — followed by a Ph.D. in computer science at William & Mary, making him, he reports, one of the first students to complete a doctoral degree in that discipline from the university. Since then, the professor has worked on AI projects such as in-flight pilot automation for NASA, air-traffic control strategies, and military surveillance, and has published a textbook on computational theory. Here, amid fears of artificial intelligence evolving beyond human control, Johnson answers questions about what he believes are the real concerns with AI. Hint: It isn’t robots falling from the sky.
Opinions about the power and consequences of artificial intelligence span the gamut — Stephen Hawking has said it could be the end of humanity. Where do you stand?
Most people are afraid of AI, largely because of how it’s portrayed in movies — that it’s going to take over the world and giant robots are going to come out of the sky. At a cocktail party, when someone hears I work in AI, the first question is always about Skynet. (Laughs.) No, I’m not afraid of that.
There is, however, a contingent of well-known people who believe in what’s called “strong AI” (also known as artificial general intelligence), which is about getting the computer to actually be intelligent — as opposed to merely seeming intelligent because it gets the right answer.
There’s no harm in researching [strong AI] and doing things with it in my opinion, but trying to [get computers to] act human isn’t the right approach. We should be continuing to come up with systems that make the best decisions given the information that they know. That’s not necessarily about being self-aware or conscious or philosophical, it’s just a program solving one particular task.
So AI isn’t about trying to recreate or mimic the way the human brain works?
No, it’s not about trying to build your thinking, feeling artificial friend. And it’s not even about understanding the human brain. To me, that’s cognitive science. And, it’s also incredibly arrogant — humans [don’t possess] the only kind of intelligence. It is, though, about trying to solve problems that are hard and complex, and that humans, by their nature, are inefficient at solving.
Your specialty is machine learning?
Machine learning, which is the design and construction of algorithms that can learn from and make predictions on data. Modeling solutions after observable phenomena in nature is what I dig most, though, so my main area of research is genetic algorithms — a problem-solving technique modeled on natural selection and population genetics that teaches a computer program how to create and run other programs. It’s very meta, and I think, very cool.
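The selection-crossover-mutation loop Johnson describes can be sketched in a few lines of Python. This toy version evolves bit-strings toward an all-ones target (the classic “OneMax” benchmark); it is an illustration of the general technique, not Johnson’s research code, and every parameter here is an arbitrary choice.

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

GENOME_LEN = 20
POP_SIZE = 30
GENERATIONS = 60

def fitness(genome):
    # Toy objective ("OneMax"): count the 1-bits.
    return sum(genome)

def tournament(pop, k=3):
    # Selection: the fittest of k randomly chosen individuals wins.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto
    # a suffix of the other.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):
    # Flip each bit with a small probability.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    population = [mutate(crossover(tournament(population),
                                   tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(fitness(best))
```

Swapping in a different `fitness` function is all it takes to point the same loop at a different problem, which is a large part of the technique’s appeal.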
I’ve also done work throughout my career in neural networks, which is teaching machines how to do stuff based on the biology of how humans or animals learn to do things, and I’ve done a lot of work in ant colony optimization. When you see ants scurrying around looking for food, they’re actually doing that based on a firm mathematical model and pheromone tracing — they’re following a scent. You can use that information to send out ‘ants’ in virtual time to see how long it takes to get a reply back that something was received, like an email. Eventually, you’re able to create a picture of the network topology and make adjustments to things like a failed server really quickly.
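The pheromone mechanism behind ant colony optimization can be illustrated with a small sketch: virtual ants walk a weighted graph (standing in, loosely, for network links and their latencies), and pheromone deposited on shorter routes biases later ants toward the best path. The graph, weights, and parameters below are invented for illustration.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

# Toy network: edge weights stand in for link latency.
GRAPH = {
    "A": {"B": 1.0, "C": 4.0},
    "B": {"C": 1.0, "D": 5.0},
    "C": {"D": 1.0},
    "D": {},
}
PHEROMONE = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}

def walk(start="A", goal="D"):
    # One ant walks from start to goal, choosing each hop with
    # probability proportional to pheromone / distance.
    path, node = [start], start
    while node != goal:
        options = [v for v in GRAPH[node] if v not in path]
        if not options:
            return None  # dead end
        weights = [PHEROMONE[(node, v)] / GRAPH[node][v] for v in options]
        node = random.choices(options, weights)[0]
        path.append(node)
    return path

def length(path):
    return sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

best = None
for _ in range(200):
    path = walk()
    if path is None:
        continue
    if best is None or length(path) < length(best):
        best = path
    # Evaporate everywhere, then deposit pheromone on this ant's
    # route in inverse proportion to its length.
    for edge in PHEROMONE:
        PHEROMONE[edge] *= 0.95
    for edge in zip(path, path[1:]):
        PHEROMONE[edge] += 1.0 / length(path)

print(best, length(best))
```

On this tiny graph the colony quickly concentrates pheromone on A→B→C→D, the shortest route; the same feedback loop is what lets real ACO systems adapt when a link (or server) fails and its route suddenly becomes expensive.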
Privacy issues are huge these days. Does AI play a role in that?
That’s also a fear for some people — that AI is being used to monitor their activities and knows them. But whatever is being done or looked at is being done in a singular frame of reference — a single webpage, a single phone conversation. To link all those things together is incredibly difficult.
So yes, [AI] has the possibility to watch what I’m doing when I’m shopping online or talking on my iPhone and to respond to that information in a limited context — but AI is never going to be able to create a completely accurate picture of who I am.
What AI can do by itself isn’t the issue, though, it’s what’s done with it that’s the problem; people gathering information about you without your knowledge or ability to control it is ethically disconcerting.
Artificial intelligence is being hailed as the fourth industrial revolution — a recent study by the World Economic Forum estimates a net loss of five million jobs due to AI in the next 20 years.
It’s a real challenge, and I think it will happen a lot in China because they’re investing so much in robotics for manufacturing. When people make decisions about whether to automate something, they’re focusing on a personal bottom line, not what happens to the workers. As a society, the more we automate, the more we have to realize people are getting pushed out of the workforce, and that can’t be the end of the discussion — there needs to be an effort to educate people in the skills that are needed.
And how is Cal State East Bay handling that education?
Well, it’s been pretty well publicized that the number of women and minorities working in Silicon Valley is pathetic. And it’s even more of an issue when you look at the demographics of the region, which are incredibly diverse. These may or may not be people whose jobs could be eliminated by AI in the future, but they’re people who aren’t gaining traction in a career field with huge growth and huge demand, and who can’t even imagine themselves working in technology. There are literally a million new jobs to be had in the next decade.
What I’m proud of is how diverse the students in the Cal State East Bay computer science program are, and how we buck enrollment trends nationally. Our undergraduate program has more than three times the number of Latino/Hispanic students compared to the national average, and 45 percent of our graduate students are women — that’s about double the average across the United States. We are serving as a model of success for what computer science programs need to look like in this country for the sake of our economic and national security.
So with all the growth that’s expected, what are your predictions for where AI is headed next?
I think the surge in AI startups will continue happening as giants like Facebook and Google invest more and more in AI technology, but as far as specifics, definitely drones. The U.S. Federal Aviation Administration has released regulations for registering drones and is testing technology that could help automate air traffic control — we should see that this year.
And the Internet of Things will continue to expand; all the physical objects, devices, buildings, etc. that are embedded with electronics. Devices will coordinate more, and therefore seem smarter. And this plays directly into more affective computing systems. It’s becoming more and more important for users to feel personally connected to their devices, especially cell phones. Many AI students study psychology, philosophy, and cognitive science, and there’s a big thrust in the field to create programs that understand a user’s mood and desires. You can do that through gathering data on things like what emojis are being used, and even the strength of a person’s keystroke. Happy people type differently than angry ones. And then maybe you have a recommender system that correlates a user’s mood with their search history or something — a little thing could pop up on the screen and offer directions, for example. A lot of things are coming together.
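As a purely hypothetical sketch of the keystroke idea, the snippet below labels a typing sample by comparing it against the average “happy” and “angry” typing profiles — a nearest-centroid classifier. All the feature values are invented for illustration; a real affective-computing system would learn such profiles from far richer data.

```python
# Toy features per typing sample: (mean key-press duration in ms,
# mean pause between keystrokes in ms). All values are invented.
SAMPLES = [
    ((95, 210), "happy"),
    ((100, 230), "happy"),
    ((140, 120), "angry"),
    ((150, 110), "angry"),
]

def centroid(label):
    # Average feature vector of all samples carrying this label.
    points = [p for p, lbl in SAMPLES if lbl == label]
    return tuple(sum(c) / len(points) for c in zip(*points))

def classify(sample):
    # Nearest-centroid: assign the mood whose average profile
    # is closest in squared Euclidean distance.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    labels = {lbl for _, lbl in SAMPLES}
    return min(labels, key=lambda lbl: dist(sample, centroid(lbl)))

print(classify((145, 115)))  # harder presses, shorter pauses → "angry"
```

A recommender system of the kind Johnson mentions could then take that mood label as one more signal alongside search history.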
What excites me the most is work on autonomous agents, but not driverless cars. I think real applications are still years away. Smart homes, maybe? My house is remarkably dumb. Or personal assistants. I could really use that — but not one automatically predicated on a female voice.