2017 May/June: The Month(s) in Robot Ethics

Newsletters

What robot ethics was about in May and June:

AlphaGo strikes again!

Seems like old news now, but it’s only a month and a half since the Future of Go Summit (also here) in Wuzhen, China, where AlphaGo convincingly won not only against the best human player (Ke Jie), but also, in a Team Go game, against a team of top-class players who all tried to defeat it together. They failed. So, unlike Deep Blue’s win against Kasparov in the nineties, this time there is no talk of unfair play (read a recent interview with the old master here). The ubiquitous Demis Hassabis also appears in an interesting interview.

Machines are just better at Go than we are. What does this mean for our enjoyment of the game? And are machines actually playing Go? Read an extensive discussion of the Philosophy of AI in games right here on Moral Robots!

Jobs or no jobs?

The discussion on robots destroying and creating jobs has continued at the same breathless pace as in the previous months. There’s even a webpage to tell you when you will lose your job to a robot. Mark Zuckerberg has renewed his call for a universal basic income (see also our previous news on the idea of a basic income). But there has been a change in tone, as new voices have come forward pointing out that perhaps the future does not look as bleak as we thought. There are hopes that robots could close the UK’s productivity gap, Marc Andreessen has pointed out that AI will create new jobs, and Intel predicts a USD 7 trillion self-driving future. The question is only who will get those trillions, and it’s likely not the people who need them most. Amazon’s takeover of Whole Foods has led to increased discussion about what we can learn from Amazon and our jobless future. Meanwhile, a new global study has focussed on the new jobs that AI will create. At the same time, truckers (obviously not one of the hot new jobs) are working alongside the robots that will replace them. So the jury is still out on whether the future of jobs looks rosy or bleak. Clearly though, our perception of life in a world without jobs might radically change.

Related:  25% of U.S. driving could be done by self-driving cars by 2030, study finds

Rights or no rights?

The UN summit on AI seemed to agree that AI technologies are the most significant technologies humanity has created, but at the same time, many scholars warn about the possible (probable? inevitable?) impact of AI on our rights. In an entertaining and enlightening piece, Joy Buolamwini points out the hidden biases that are built into the most innocent-looking devices: in this case, the colour profiles of digital cameras: “Algorithms aren’t racist. Your skin is just too dark.”

Doctors, interrupted

AI keeps marching on in the area of medicine. After the past months brought us skin cancer detecting machines and better diagnoses of heart failure risk, now we hear of improvements in automated lung cancer detection. At the same time, Google’s eye doctor gets ready to work in India, and Bristol begins a trial of 3D-printed bionic hands.

Tinkering

The scene for hobbyists and startups who want to get into AI keeps exploding with possibilities. There is a very lively scene creating chatbots, and there are several trending tools out there to help people do just that. Chatbots can even be used as therapists (we’ve been suspecting this since Eliza, but it seems that now someone is taking it seriously). TensorFlow hacking has also become easier, with Google releasing mobile-first TensorFlow models, TensorFlow Lite being offered to Android developers, and TensorFlow Mobile being ready for download at GitHub.

Soft skills

Increasingly, AI systems are pushed to acquire soft skills and to prove themselves in areas traditionally reserved for humans: neural networks can be used to make art and to negotiate, and the hope is that eventually they’ll learn some social etiquette (useful for when they take over your job; then they can do it nicely). Google, meanwhile, is trying to police speech by using AI to censor YouTube. Although curbing extremism sounds like a worthy goal, we must be aware that not every dissenting opinion is dangerous, and dissent, in itself, is one of the fundamental values of every free and democratic society. The line between censoring extremism and getting rid of free speech is extremely thin, and it seems unlikely that machines will have the necessary sensitivity to do the job right.

Related:  2017/03: The Month in Robot Ethics

Uber and Alexa

To close on a low note: Uber, although not yet recovered from its recent misadventures, seems determined to press on with self-driving cars. And, finally, that thing sitting in your living room listening to your every word reveals its dark soul: ads are coming to Alexa, as everyone always suspected they would.


With that, it’s probably better that we end here. Thanks for reading, and an exciting July to all!

— Andy@moral-robots.com, http://moral-robots.com