Why We Should Be Worried About Computers Taking Over The World

Lee Davy sits down with Adriano Mannino, Chairman and co-founder of Raising for Effective Giving, to discuss the organization’s recent presentation during the World Series of Poker, why more people don’t give to charity, and its interest in the rise of artificial intelligence.

When Carnegie Mellon’s computer Claudico took on poker’s finest in a series of heads-up matches, the poker community put down their controllers, stopped playing Super Mario World and were enthralled.

Adriano Mannino, Chairman and co-founder of the non-profit organization Raising for Effective Giving (REG), wasn’t enthralled. He was worried. When he gazed at the contest, he saw an algorithm that would never stop learning. Then the controllers started moving, Mario and Luigi were kicking Bowser’s butt, and there wasn’t a human being in sight.

I caught up with the sharp-suited effective altruist to talk about the progress of the REG movement, why Elon Musk has given $10m to a research unit created to look into the safety aspects of artificial intelligence, and why I have used a Super Mario World metaphor.

Talk about the REG presentation that you held at the Rio this week!

“It was great. Organizing such things is always a challenge, so we were glad it all went well. There were a lot of great players present. I spoke for 30 minutes and we had a Q&A for another 30 minutes. There were some great questions, and people like Dan Shak, Justin Bonomo and Galen Hall made wonderful suggestions regarding how we can convey our ideas beyond the poker world. It was also great to see that so many people were dedicated on a personal level.”

How do you follow up with the players who were present?

“Our main mission is to present the research that we have done: charity evaluation, looking into different cause areas and charities, trying to come up with estimates of how many lives particular interventions can save, and putting the arguments out there as to why we should help at all. Our main goal is to put this information out there, i.e. to provide the service of science-based, rational charity evaluation. The rest happens organically. There is no point in being overly pushy. People contact us with ideas they have, and we meet up for further discussions or continue the dialogue via e-mail.”
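As a rough illustration of the kind of estimate this sort of charity evaluation produces, here is a minimal sketch that divides a donation by an assumed cost per life saved. Both figures are placeholder assumptions for illustration, not REG’s actual estimates.

```python
# Toy cost-effectiveness estimate of the kind charity evaluators produce.
# Both figures below are illustrative assumptions, not REG's numbers.

donation_usd = 50_000        # assumed size of a donation
cost_per_life_usd = 3_500    # assumed cost for a top charity to save one life

lives_saved = donation_usd / cost_per_life_usd
print(f"Estimated lives saved: {lives_saved:.1f}")  # prints ~14.3
```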

REG currently has 159 members. I believe the poker community can do better. What do you think needs to be done to increase that number?

“It would be desirable if that number were 10x bigger. But you have to consider we have only just begun. We had our launch dinner a year ago in Vegas. We had our first matching challenge a few months earlier: Philipp Gruissem and Igor Kurganov donated $50k together and asked their fans and followers to donate the same amount, and that’s what kicked us off.

“I think 159 sign-ups is a decent result, and as we continue our work, putting more information out there, holding more tournaments, and liaising with more casinos, that number will rise. The more people we have on board, the more they will talk to other people, and the more will come on board. We hope this movement will keep on growing. We do see fields of thousands of players, and against that, 159 members may not seem like much, but if you think about what fraction that is, it’s a decent one. Also, a lot of people who haven’t signed up are still involved in other charities, and that’s great too.”

What does the research say when it comes to why people don’t give to charity?

“Many people, in the psychological thought experiments that one can present, will agree that they have an interest in donating to charity and saving lives if they can do so at little cost to themselves, particularly if it’s emotionally touching. If you present them with a specific biography of a child that needs help, then the majority of people who have money will help. But when the suffering is far away, our brains have a harder time relating to it on an emotional level.

“Of course, emotions are always needed for action. They are the main drivers of action. If we can’t rely on emotion directly because the suffering is happening far away, then we need a more cognitive and rational approach. We really need to conceptualize the situation that we are in and understand that we demonstrably have the power to affect lives at the other end of the globe if we use our money effectively. That’s the main obstacle: creating that emotional connection to suffering that’s far away, and we are trying to use rationality to bridge the gap.”

I often find that science fiction soon becomes science fact. There has been a spate of movies recently covering AI. Chappie was a particularly good one that I watched recently. In your REG dinner talk you mentioned that Elon Musk has donated $10m to the Future of Life Institute, and they have started to give that money out to different research units. What are people like Musk worried about, and what do you think the research will cover?

“A central part of our approach is that we want to be open-minded, think outside the box, and evaluate potential areas of charity that may not be mainstream or conventional.

“We have started with poverty and down-to-earth strategies, but we have also broadened our perspective and asked the question: on a global scale, what sorts of risks and developments could be most disruptive? Climate change is one example of something that has the potential to have globally disastrous effects and that will affect the far future.

“When your approach is to save as many lives as possible and reduce as much suffering as possible, then the far future is very important. It seems that far more people will be living in the far future than are alive today, simply by virtue of the succession of generations. That’s the background to why we have started to look into AI as a cause area.

“Elon Musk has just donated $10m to AI safety research. The main concern is that computer algorithms and machines are becoming increasingly powerful and more intelligent. There are a number of intelligence-related domains and games in which algorithms already outperform humans. Chess is one of them, and I recently saw an algorithm outperform humans at Super Mario World. It was crazy seeing an algorithm figure out strategies that humans had never uncovered. What we call the “real world” is just a 3D game of greater complexity. AI capability research is advancing rapidly, but the safety research is dangerously lagging behind.

“There are many historical examples of highly consequential technology. Nuclear technology came faster than people expected, and the safety research, along with the political and legal work on international coordination and avoiding arms races, lagged behind. Musk and people like him are trying to boost safety research at the expense of pure capability research in order to try and guarantee a safer outcome.

“Why intelligence technologies? Intelligence has so far been a biological phenomenon that relates to the behavior of animals, and humans are animals. Intelligence just means the capacity to achieve goals across a varied range of environments. Why are humans the most powerful species on earth? Because we are the most intelligent. We can totally dominate the chimps. But if entities we create artificially come about and trump us in terms of intelligence, it’s easy to see that some decades or centuries down the line they could stand in a similar relation to us as we stand to chimps.

“Some people would say: ‘Isn’t that science fiction?’ Well, many experts strongly disagree, e.g. the Oxford professor Nick Bostrom in his new book “Superintelligence”, which I highly recommend. But even if you contradict these experts and only place a small probability on dangerous AI scenarios materializing reasonably soon, the stakes are extremely high, so the expected value of addressing them is still high. We’re collaborating with the Foundational Research Institute (FRI) as well as the Machine Intelligence Research Institute (MIRI) in order to find the most promising donation options for averting disastrous AI outcomes.”
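Mannino’s expected-value argument can be made concrete with a toy calculation. The probability and stakes below are illustrative assumptions, not figures from the interview; the point is simply that a small probability multiplied by very large stakes still yields a large expected loss.

```python
# Toy expected-value calculation for a low-probability, high-stakes risk.
# Both numbers are illustrative assumptions, not figures from the interview.

p_catastrophe = 0.01              # assumed 1% chance of a disastrous AI outcome
lives_at_stake = 8_000_000_000    # assumed stakes: roughly everyone alive today

expected_lives_lost = p_catastrophe * lives_at_stake
print(f"Expected lives lost: {expected_lives_lost:,.0f}")  # prints 80,000,000
```

On these assumed numbers, even a 1% probability implies an expected loss of 80 million lives, which is why a low probability alone does not make the problem ignorable.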

I read that Google’s work on the driverless car needed the input of philosophers, because the AI had to be programmed to make human-like decisions in emergency situations.

“On the road you can have moral dilemmas in the real world. Say your driverless car is heading toward a group of kids, and they would all die if it didn’t get out of their way. But the only way to avoid the kids is to drive into a wall, and you will die. How is the algorithm going to decide in that case? It’s a question with a philosophical and ethical dimension. Should we in all instances try to minimize the victim count, or look to other principles?

“Then there is the technical side of things. Even if we can agree on an answer to the ethical dilemma, there is the technical problem of how to reliably program those human values into the car. That’s a good example of a very down-to-earth AI problem. But the much bigger problem that Musk is worried about is not the driverless car algorithms that exist today; he is worried about extrapolated future algorithms that would be much more powerful.

“The worry is that at some point an algorithm will emerge that can beat humans in every cognitive capacity: science and technological innovation, common sense, and social skills. Such a scenario would obviously present severe dangers. We need to solve the problems of which values we want to program into these machines and how to make sure they remain stable and reliable. It’s clear that if you have a super-powerful intelligence that is not programmed to respect the wishes and suffering of humans and other creatures, then the outcome could be globally catastrophic.”
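To make the “minimize the victim count” principle concrete, here is a purely hypothetical sketch of how it could be encoded as an objective function. The scenario, the Maneuver class, and the casualty estimates are invented for illustration; nothing here reflects how any real driverless-car system is actually programmed.

```python
# Hypothetical sketch: picking an emergency maneuver by minimizing the
# expected victim count. Purely illustrative; not how any real autonomous
# vehicle is known to be programmed.

from dataclasses import dataclass

@dataclass
class Maneuver:
    name: str
    expected_victims: float  # assumed casualty estimate for this action

def choose_maneuver(options: list[Maneuver]) -> Maneuver:
    """Pure victim-count minimization; a different ethical principle
    would require a different objective function."""
    return min(options, key=lambda m: m.expected_victims)

# The dilemma from the interview, with invented numbers:
options = [
    Maneuver("continue straight", expected_victims=3.0),  # the group of kids
    Maneuver("swerve into wall", expected_victims=1.0),   # the passenger
]
print(choose_maneuver(options).name)  # prints "swerve into wall"
```

The ethical choice lives entirely in the objective: swap in a rule that forbids sacrificing the passenger and the same code returns a different answer, which is exactly the philosophical question Mannino raises.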

[This interview was conducted by Lee Davy and originally published on CalvinAyre.com.]