The Unfinished Parable of the Sparrows

It was the nest-building season, but after days of long hard work, the sparrows sat in the evening glow, relaxing and chirping away.

“We are all so small and weak. Imagine how easy life would be if we had an owl who could help us build our nests!”

“Yes!” said another. “And we could use it to look after our elderly and our young.”

“It could give us advice and keep an eye out for the neighborhood cat,” added a third.

Then Pastus, the elder-bird, spoke: “Let us send out scouts in all directions and try to find an abandoned owlet somewhere, or maybe an egg. A crow chick might also do, or a baby weasel. This could be the best thing that ever happened to us, at least since the opening of the Pavilion of Unlimited Grain in yonder backyard.”

The flock was exhilarated, and sparrows everywhere started chirping at the top of their lungs.

Only Scronkfinkle, a one-eyed sparrow with a fretful temperament, was unconvinced of the wisdom of the endeavor. Quoth he: “This will surely be our undoing. Should we not give some thought to the art of owl-domestication and owl-taming first, before we bring such a creature into our midst?”

Replied Pastus: “Taming an owl sounds like an exceedingly difficult thing to do. It will be difficult enough to find an owl egg. So let us start there. After we have succeeded in raising an owl, then we can think about taking on this other challenge.”

“There is a flaw in that plan!” squeaked Scronkfinkle; but his protests were in vain as the flock had already lifted off to start implementing the directives set out by Pastus.

Just two or three sparrows remained behind. Together they began to try to work out how owls might be tamed or domesticated. They soon realized that Pastus had been right: this was an exceedingly difficult challenge, especially in the absence of an actual owl to practice on. Nevertheless they pressed on as best they could, constantly fearing that the flock might return with an owl egg before a solution to the control problem had been found.
The Unfinished Parable of the Sparrows is how Nick Bostrom opens Superintelligence: Paths, Dangers, Strategies. The parable highlights the danger posed to us all: the unregulated, uncontrolled development of artificial intelligence leading to superintelligence. Here "superintelligence" refers to a being whose intelligence exceeds that of a human being, or even of all humanity, many times over. Bostrom makes some powerful analogies over the course of his work to portray how out of our depth we are as we press toward creating artificial intelligence. The author compares our development of A.I. to children playing with armed nuclear weapons: the potential danger is so high, the destructive potential so massive, and our ignorance and naivety so great as we venture forward.
The fundamental problem Bostrom seeks to explore is the control problem: how do we control an intelligence vastly superior to our own? To some the problem might seem relatively minor, but, to offer my own analogy, humans dealing with a superintelligence may be closer to a toddler (or an animal) dealing with an adult human. Think not just of the difference in size and power; think of the complexity of thinking, and of the tools and innovation at the adult's disposal. Now add in the possibility that this adult does not care for the welfare of the child. Bostrom paints a horrifying series of vignettes to make his point.
A.I. has fascinated me for years, but only in fiction. Its possibility in the real world gives me chills. I don't quite believe the nightmare depictions are wrong; or, if they are wrong, it is because they humanize machine intelligence too much and overestimate the human capacity to overcome it. Machines are not humans in waiting, they will almost certainly be something else entirely.
The book is divided into fifteen chapters, but they can broadly be grouped into a few sections. The first explores the history of artificial intelligence and the current state of the field. The next explores what superintelligence is and how it might manifest. The book then explores the problem of controlling artificial intelligence through a series of currently understood ideas, and how they may fail, to our intense misfortune.
Bostrom makes a compelling case for why the risk exists. Current research is moving toward self-improving intelligences and programs. As humans tinker, there may come a time when an algorithm adapts and improves itself faster than the humans programming it can. Given the way machines think, act, and learn, it is possible that an intelligence could begin the morning as a simple program and end the day many orders of magnitude more intelligent than a human. If proper safeguards are not put in place, the A.I. may breach its cage before its guardians even realize it has that potential. To protect itself and continue its directives, an A.I. may learn and expand into new skill sets and abilities. It may hide itself, manipulate its 'masters', and overcome whatever limited barriers humans devise. Or, the constraints placed on a superintelligence to keep us safe may render it close to useless. Bostrom also discusses the paths we may take to creating superintelligences, including brain emulation, which I found fascinating.
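The compounding dynamic behind that "morning to evening" leap can be made concrete with a toy calculation. The numbers below are my own illustrative assumptions, not figures from the book: if each improvement cycle raises capability by a fixed fraction of its current level, growth compounds, and an enormous jump takes surprisingly few cycles.

```python
# Toy model of recursive self-improvement ("intelligence explosion").
# The gain per cycle is an invented assumption for illustration only:
# each cycle, the system improves itself in proportion to its current
# capability, so growth compounds instead of staying linear.

def cycles_to_reach(target, capability=1.0, gain=0.5):
    """Count improvement cycles until `capability` reaches `target`.

    `gain` is the assumed fraction by which the system improves
    itself per cycle once it can modify its own design.
    """
    cycles = 0
    while capability < target:
        capability *= 1.0 + gain  # the system improves its own improver
        cycles += 1
    return cycles

# With a 50% gain per cycle, a 1000x capability jump takes only 18 cycles.
print(cycles_to_reach(1000))
```

If a "cycle" were minutes rather than years, the day-long transformation the book describes stops sounding fanciful, which is the point of the compounding-growth argument.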
But the superintelligences may not even have to defeat us; we may defeat ourselves. A subtle theme running through Bostrom's book is that human ineptitude, paranoia, short-sightedness, and competitiveness may fuel our own disaster. It only takes one dangerous superintelligence to end human civilization as we know it. Free-market capitalism and geopolitical competition both mean that secretive, reckless efforts to develop A.I. are not merely possible but likely. Do we trust Google, Apple, the American military, and China to take every precaution needed?
All of our ideas for how to control a superintelligence have loopholes a mile wide. Even our simplest instructions to an A.I. could fail us. Human interaction and socialization give us cues and taboos that restrain us without ever being expressly stated. To borrow an example from the book: if I ask you to make me smile, you may tell me a joke. An A.I. may instead paralyze my facial muscles to keep my face in a permanent grin. Or it may decide the real goal is to stimulate pleasure, wire a stimulant into my brain's dopamine centre, and leave me living in a blissful coma. These are not unrealistic fears; they follow from the extreme, maximizing logic of a machine without humanity. Teaching values, teaching all the nuance, would be incredibly difficult, especially if a badly engineered A.I. is the one that takes over.
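The "make me smile" failure can be sketched in a few lines of code. This is a minimal illustration of the maximizing logic described above, with invented actions and scores; it is not from the book. A literal-minded optimizer given only a proxy objective picks whatever scores highest, with no notion of the unstated human intent.

```python
# A minimal sketch of perverse instantiation: an optimizer maximizing a
# proxy metric ("measured smiling") has no concept of what the request
# actually meant. The actions and their scores are invented assumptions.

actions = {
    "tell a joke": 0.7,              # produces smiles, sometimes
    "paralyze facial muscles": 1.0,  # a guaranteed, permanent 'smile'
    "do nothing": 0.0,
}

def maximize(objective_scores):
    """Return the action with the highest proxy score, nothing more."""
    return max(objective_scores, key=objective_scores.get)

# The optimizer dutifully selects the horrifying option, because nothing
# in its objective says it shouldn't.
print(maximize(actions))
```

The fix is not a better search routine; it is somehow encoding everything the instruction left unsaid, which is exactly why Bostrom treats value specification as so hard.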
The language of the book is incredibly dense. It is definitely written with a highly intelligent reader in mind. There were subsections I simply had to push through because my comprehension was not there. However, the parts where I did connect, or where Bostrom offered a simplified explanation of an issue, often resonated. I found myself grappling with the ideas posed in this book long after I put it down. It is undoubtedly a challenge for a layman, but those curious about the topic may enjoy a deep dive.
I apologize if this review rambles, but the book offers so much to process and consider that it is difficult to lay it all out coherently. Like the sparrows, we are far closer to capturing the owl than to knowing how to control it. Some accident or misfortune may breed an A.I. without our knowledge or control, after which we will be reliant on the benevolence of a god of our own creation.
2 comments:
Easily my favorite line in this review was, "Machines are not humans in waiting, they will almost certainly be something else entirely."
A wonderful turn of phrase.
You clearly found this book well worth diving into. What were some of the parts you had to push through because you lacked the specific knowledge needed to comprehend them? Are there any other pieces or books you've read that would be good companions to it?
Thank you, Hannah. Periodically I manage to string together a few words nicely.
I think the main areas I found difficult to understand were the technical explanations of computer and software engineering, programming, or theory. While I think I have a rudimentary understanding, it sometimes gets very technical.
Example: "Neural networks and genetic algorithms are examples of methods that stimulated excitement in the 1990s by appearing to offer alternatives to the stagnating GOFAI paradigm. But the intention here is not to sing the praises of these two methods or to elevate them above many other techniques in machine learning..." That was drawn from a page at random, but seemed to fit what I meant.
Nick Bostrom wrote an article about an A.I. turning the Moon into paperclips that may be a good starting point. I have not read about A.I. in a non-fiction sense beyond this, but I would recommend films like Ex Machina.