“Smart Machines” As Philosopher Kings

In press: Technological Forecasting and Social Change, May 2004

During an informal workshop I recently attended on the long-term prospects for the planet Earth, a lively exchange was initiated by an astronomer/futurist who expressed the view that the impending development of “smart machines” represents a potential threat to humankind. Artificial intelligence is evolving at such a breathtaking rate, he argued, that we will very soon create super-intelligent machines that could perhaps turn us into their “pets.” Indeed, this distinguished scientist thought such an outcome was likely; competition and market forces, he said, are driving the trend.

Earlier in the workshop there had been a brief discussion of Plato’s idealistic vision – in his great dialogue, the Republic – of rule by specially trained “philosopher kings.” So I raised this idea as a possible solution to the problem. Why couldn’t we program our smart machines to be philosopher kings? They might then rule over us with detached wisdom and selfless service to humanity – “reason unaffected by desire,” as Plato put it. Smart machines could certainly improve on the performance of our all-too-human political leaders, CEOs, judges, and priests, I thought.

My proposal was immediately dismissed by the other workshop participants as impracticable. If machines truly do become much smarter than we are, I was told, they will easily thwart our paltry efforts to control them; eventually they will take over. My mind flashed on the rogue computer HAL in Stanley Kubrick’s classic sci-fi movie 2001: A Space Odyssey, and I imagined a super-HAL that was much better prepared than its fictional namesake for a power struggle with its human handlers. Indeed, in “Terminator 3: Rise of the Machines,” a super-intelligent cyborg takes the form of a lithe young female and engages in an epic struggle with Arnold Schwarzenegger. Or perhaps these smart machines might band together and work in concert, like the ruthless, all-powerful Borg in the Star Trek TV series. Remember their mantra: “Resistance is futile.” Could Hollywood be on to something?

The conversation moved on, but the issue stayed with me. After the workshop ended, I began to think further about this important problem. It seemed to me that smart machines are only the latest of the many Faustian bargains we have struck with new technologies over the course of our evolution – from the adoption of fire itself to nuclear power and genetic engineering. The perennial question is: How can we control a new technology for the good of humankind? How can we ensure that it does not become destructive, or that its costs do not outweigh its benefits? Are smart machines any different? Are the coming super-HALs, or the incipient Borg, likely to instigate a technological coup?

One (hopeful) answer can be found in biological evolution, and in the patterns observed in the natural world. First, a clear distinction should be drawn between “intelligence” – however fuzzy our understanding of it – and the property of being goal-directed, or “purposive,” or having “intentions.” Every living organism is “purposive” by its very nature; it has been “designed” by natural selection to pursue the goal of survival and reproduction. In other words, it has cybernetic properties: it monitors its internal state and its environment and adjusts its behavior in the service of its goals. Biologists refer to this purposiveness as “teleonomy,” to distinguish the evolved, internal goal-directedness of living systems from an externally imposed teleology. Intelligence in the natural world is certainly not a unique attribute of large-brained mammals, but it is always subordinated to the organism’s internal teleonomy. Thus, single-celled “smart bacteria,” without the benefit of a brain or nervous system, can gather information in various ways and make discriminating choices. Even the marine alga Fucus can “sense” and integrate many different sources of information (17 in one study) and can make “decisions” between close alternatives. Among birds, the relatively small-brained ravens are legendary for their inventiveness and problem-solving capabilities, while such human “pests” as rats and coyotes often seem able to outsmart us. Intelligence and brain size are at best imperfectly correlated.

So intelligence, including “technology,” has been serving the evolved purposes of living organisms for billions of years. Indeed, new technologies have often been the “pacemakers” of evolutionary change. For instance, some bacteria utilize magnetite as a direction-finder to aid in their navigation. The famous Galápagos woodpecker finch uses cactus thorns or small sticks, held in its beak, as tools for digging out insect grubs. And elephants, it turns out, are masterful tool-makers and tool-users – fashioning everything from fly swatters to back scratchers, clubs and projectiles, even implements for making “drawings” in the dirt.

Perhaps the most important lesson from the natural world, though, is that there is no inherent reason why artificial intelligence must be self-interested, or self-serving. In fact, natural selection has also produced many “altruists,” organisms that purposefully pursue the survival interests of others – their offspring, close kin, or the members of their “group.” There is no reason to believe that we cannot emulate nature in this important respect. If we choose to do so, we can create smart machines that will be our obedient servants, not the other way around. (Of course, it’s also true that every important new technology has induced changes in our behavior, in our relationships to one another, and in our environments, as many theorists over the years have noted.)

The real danger is that, having created a new class of computerized philosopher kings, we will then reject their wise, benevolent counsel. If, instead of heeding their warnings (say, about global warming), we continue willfully to pursue our often self-destructive personal, economic, and political agendas, we may very well muddle our way to extinction. So maybe the doomsayers will be proven right after all. Maybe the smart machines will simply replace us, not because they have won a power struggle with humankind but because they will ultimately be favored by natural selection over an adaptively inferior human species. We may yet prove to be just another evolutionary dead end. So, if the emerging smart machines are truly smart, they will make it a priority to figure out how to get along without us when we’re gone. That’s a very different kind of ending from the one favored in Hollywood movies. But then, Hollywood producers (and their audiences) are only human.
