Tuesday, December 20, 2016

Long-Term Thinking: The Question of Artificial Intelligence

A few years ago Andrew Coyne gave a speech in which he argued that Parliament would become increasingly important and that our consensus on certain issues would transform politics. He suggested that the neo-liberal consensus would give rise to new debates: debates about the nature of humanity and the questions that new technology has raised and will continue to raise. Mr. Coyne seems to have been disproved, at least for now; my own theory is that the global consensus on neo-liberalism is fracturing. Still, there are a number of issues that the Canadian Parliament should start weighing before we are overwhelmed.

Artificial intelligence is one of my favourite themes in science fiction. Over the last couple of years popular culture has latched onto the concept, and a number of films and television series have come out exploring humanity's relationship with artificial intelligence and sentience. The majority of these depictions are negative or threatening; the public clearly has some anxiety over the creation of artificial intelligence. Writers like Nick Bostrom suggest that AI poses tangible dangers and that precautions are required to protect us.

As far as I am aware there are no laws governing or regulating the development of artificial intelligence. It would not be unreasonable, for example, to insist that artificial intelligence be developed on air-gapped computers, or that all such programs and automatons have a built-in kill switch. The dangers of rogue AI are extreme enough that even modest precautions should be accepted at face value.
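For what it's worth, the kill-switch precaution is simple to picture in code. The sketch below is purely illustrative, not a real safety mechanism: a hypothetical agent loop checks for an operator-created sentinel file (the filename `halt.flag` is my own invention) and stops all work the moment it appears.

```python
import os

# Hypothetical sentinel file; an operator creates it to engage the kill switch.
KILL_SWITCH = "halt.flag"

def run_agent(steps):
    """Run a stand-in work loop that halts as soon as the kill switch trips."""
    completed = 0
    for _ in range(steps):
        if os.path.exists(KILL_SWITCH):
            # Operator has created the flag file: stop immediately.
            return completed
        completed += 1  # placeholder for one unit of the agent's work
    return completed

if __name__ == "__main__":
    print(run_agent(1000))
```

The real policy question, of course, is whether developers would be required to build in such a mechanism at all, and who would be authorized to trip it.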

Beyond paranoia (healthy as it may be) about the development of artificial intelligence, there are inevitable questions that will arise if we successfully develop artificial life. If we create independent, autonomous beings as represented in fiction like Westworld, Ex Machina, Her, etc., what rights will be extended to them? Should any be? Should artificial beings be treated like biological citizens, or should they be treated like, say, corporations? Corporations are legal persons, but they are not allowed to vote and do not exercise the other rights of living beings. If you kill or disable an AI, is that murder or property destruction? Will androids and AIs be owned? Is that slavery?

One of the big questions about artificial intelligence is how we will tell whether it is real. Artificial intelligence designers may merely create things that are very capable of imitating people rather than genuinely sentient. From there you get into debates about sentience and the nature of human consciousness.

One of my concerns for years has been that the creation of androids will exacerbate sexism and inhumanity. When we have the ability to exploit and abuse things that are indistinguishable from humans, the threat to broader society seems fairly obvious. Creating intelligent, responsive beings for the sole purpose of satisfying our pleasure and violent impulses is unsettling.

Obviously the Canadian Parliament does not need to pass laws on these matters immediately, but it would be wise to start raising these questions and laying down some basic regulations to protect ourselves from the worst-case scenario. This might be perfect work for the Senate to take up. As much as this may sound like science fiction, I think the trend lines fairly clearly point in that direction, so why not prepare for it?
