This backflipping noodle has a lot to teach us about AI safety

News

From The Verge:

AI isn’t going to be a threat to humanity because it’s evil or cruel; AI will be a threat to humanity because we haven’t properly explained what it is we want it to do. Consider the classic “paperclip maximizer” thought experiment, in which an all-powerful AI is told, simply, “make paperclips.” The AI, not constrained by any human morality or reason, does so, eventually transforming all resources on Earth into paperclips and wiping out our species in the process. As with any relationship, when talking to our computers, communication is key.

https://www.theverge.com/platform/amp/2017/6/14/15792818/ai-safety-human-feedback-openai-deepmind
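The linked article describes OpenAI and DeepMind's approach of learning what humans want from pairwise feedback on short behavior clips, rather than from a hand-written objective like "make paperclips." A minimal sketch of that idea, with everything hypothetical (the two features, the synthetic "human," the data): fit a linear reward from preference comparisons using a Bradley-Terry model.

```python
import math
import random

random.seed(0)

# Each behavior clip is summarized by two hypothetical features:
# (progress toward the stated goal, resources consumed).
def true_human_reward(clip):
    progress, resources = clip
    # A stand-in for real human judgment: people also care about
    # not destroying everything, not just raw paperclip output.
    return progress - 2.0 * resources

def make_clip():
    return (random.random(), random.random())

# Synthetic preference data: the "human" picks the clip they like more.
comparisons = []
for _ in range(500):
    a, b = make_clip(), make_clip()
    label = 1.0 if true_human_reward(a) > true_human_reward(b) else 0.0
    comparisons.append((a, b, label))

# Learn a linear reward r(clip) = w . features via the Bradley-Terry
# model: P(a preferred over b) = sigmoid(r(a) - r(b)).
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    for a, b, label in comparisons:
        diff = sum(wi * (ai - bi) for wi, ai, bi in zip(w, a, b))
        p = 1.0 / (1.0 + math.exp(-diff))
        # Gradient of the log-likelihood with respect to w.
        w = [wi + lr * (label - p) * (ai - bi)
             for wi, ai, bi in zip(w, a, b)]

print(w)
```

Up to scale, the learned weights should recover the human's trade-off: positive on goal progress, negative on resource consumption, which is exactly the constraint the paperclip maximizer lacks.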

Related: ‘World’s first robot lawyer’ now available in all 50 states