The State of Play in Artificial Intelligence

First, true artificial general intelligence (or AGI) is still very far in the future, despite the hype from various boosters.  It remains very much in the speculative stage, with many formidable challenges to overcome, including a better understanding of the human brain and of human goals, values, and ethics – not to mention our many individual differences.

Second, it is also clear that true AGI might indeed ultimately resemble HAL, the notorious rogue computer in the classic science fiction movie “2001: A Space Odyssey” – complete with some analogue of consciousness and an understanding of human emotions, though hopefully without the paranoia – despite dismissive claims of “Hollywood fiction” from some quarters.

In the meantime, though, further advances in AI are coming rapidly – from self-driving cars to Amazon “fulfillment center” (warehouse) robots, instant language translators, facial recognition systems, and Google’s Smart Compose, a “predictive text” feature that helpfully tries to finish your sentences as you write them.  (Not mine, for the record.)

As various AI experts have been warning, a major problem even now, let alone in the long term, is “control.”  How do we ensure that AI systems are beneficial?  And obedient?  I refer to it as Plato’s dilemma, after his classic treatise on social justice, the Republic.

It should also be noted that the problem of control has two distinct aspects – how to exercise control, and who will do the controlling.  Whose goals and values will be in charge?  This is ultimately a social and political challenge, not a technical problem.  

Another serious problem, and one that concerns us even now, is how to prevent AI systems from making horrendous mistakes with sometimes lethal consequences (like the Boeing 737 Max).  Likewise, on Wall Street, where automated trading systems have been in use since the 1980s and where it is said that more than half of all trades are now executed by such systems, there have been a number of serious “flash crashes” (as they are called) over the years, including the famous disaster on “Black Monday” in 1987, when the stock market lost 22% in a single day.  Several more such incidents have occurred in recent years, though with less serious consequences.
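One standard safeguard against runaway automated trading is a “circuit breaker” that halts trading when prices fall too far too fast.  A minimal sketch of the idea, in Python – the class name and the 7% threshold are illustrative assumptions here, not any exchange’s actual rules:

```python
# Hypothetical sketch: halt automated trading when the intraday drop
# from the opening price exceeds a threshold. Names and numbers are
# invented for illustration.

class CircuitBreaker:
    def __init__(self, max_drop_pct: float = 7.0):
        self.max_drop_pct = max_drop_pct  # halt if intraday drop exceeds this
        self.open_price = None
        self.halted = False

    def on_price(self, price: float) -> bool:
        """Record a new price; return True if trading may continue."""
        if self.open_price is None:
            self.open_price = price  # first tick sets the reference price
        drop = (self.open_price - price) / self.open_price * 100
        if drop >= self.max_drop_pct:
            self.halted = True  # once tripped, stays tripped
        return not self.halted

breaker = CircuitBreaker(max_drop_pct=7.0)
print(breaker.on_price(100.0))  # True  (opening price recorded)
print(breaker.on_price(95.0))   # True  (5% drop, under threshold)
print(breaker.on_price(92.0))   # False (8% drop, trading halted)
```

The point is the “kill switch” pattern itself: a dumb, transparent rule sits outside the clever system and can stop it cold.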

There is also the formidable challenge of dealing with what AI experts refer to as “edge cases” – novel situations that go beyond past experience.  Human societies are constantly changing and evolving, with new problems and discontinuities that no training data anticipates.  We are part of a vast historical process, and novelty is a constant in human life.  It is one of the things that has been confounding the development of self-driving cars, for example.  We are still vastly better at adapting to change than any AI system.
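One common mitigation is for a system to recognize when an input lies outside its experience and abstain rather than guess.  A toy sketch of that idea – the training values, margin, and labels below are invented for the example:

```python
# Illustrative sketch of one edge-case defense: refuse to act on inputs
# outside the range seen in training, deferring to a human instead.

def make_guard(training_values, margin=0.1):
    lo, hi = min(training_values), max(training_values)
    span = hi - lo
    lo -= span * margin   # allow a small buffer around experience
    hi += span * margin
    def guard(x):
        return "predict" if lo <= x <= hi else "defer_to_human"
    return guard

guard = make_guard([2.0, 3.5, 4.0, 5.0])
print(guard(3.0))   # predict         (familiar territory)
print(guard(9.0))   # defer_to_human  (novel input, beyond experience)
```

Real systems use far more sophisticated novelty detectors, but the principle is the same: know the boundaries of your own experience.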

Add to this the innumerable moral dilemmas and the difficult, sometimes zero-sum choices that confront us every day, not to mention the spreading cancer of lies, deceptions and “fake news” in human societies.  It’s certainly possible to fool an AI system as well!

There is also the problem of how to choose among conflicting perceptions and opinions.  Which economist should our AI systems rely upon, a liberal economist like John Maynard Keynes, Paul Samuelson, Joe Stiglitz, or Paul Krugman, or a conservative like Friedrich Hayek, Ludwig von Mises, or Milton Friedman?  

Especially helpful going forward was a recommendation in an op-ed in The New York Times a few weeks ago by Ifeoma Ajunwa, a law and technology professor at Cornell.  One way to avoid what she called the “tyranny of the algorithm” is to keep the final decision-making power in human hands whenever possible.  An AI system can propose a course of action, but a human must approve it.  Thus, for example, a dating algorithm may help us to find a mate, but it should not be allowed to become a matchmaker and dictate our choice.
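The pattern Ajunwa describes – the system proposes, a person disposes – can be sketched in a few lines.  The function and reviewer below are hypothetical stand-ins; a real system would route proposals to a human review queue:

```python
# Minimal sketch of a "human in the loop" gate: an AI proposal is only
# carried out if a human approver consents. All names are illustrative.

def execute_with_approval(proposal, approve):
    """Run an AI proposal only if the human approver consents."""
    if approve(proposal):
        return f"executed: {proposal}"
    return f"rejected: {proposal}"

# Stand-in for a human reviewer who vetoes anything irreversible.
def cautious_reviewer(proposal):
    return "delete" not in proposal

print(execute_with_approval("recommend three candidates", cautious_reviewer))
# executed: recommend three candidates
print(execute_with_approval("delete applicant record", cautious_reviewer))
# rejected: delete applicant record
```

The design point is that the veto sits with the human, structurally: the system has no code path that acts without approval.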

Finally, I would like to summarize an important point that was made by the neuroscientist Peter Tse at the end of a recent panel discussion on AI at a major science festival.  In his concluding remarks, Prof. Tse noted that existing AI systems deploy a simple, passive model of how the brain works (a neural network) that is more than 20 years old, and that the brain sciences are now converging on a potentially more complex and more dynamic model that could have “revolutionary” implications for the architecture and performance of AI systems in the future as well. 

Very briefly, the current model has a static structure: a multi-layered system of nodes (“neurons”) linked by weighted connections that determine how strongly impulses flow along various routes.  It’s analogous to a set of interconnected roads, where the user makes the choices.  Recent evidence suggests that the brain’s network is in fact more like a railroad with shunts that determine which impulses flow along which pathways.  So the same express train could go from Chicago to Boston, or from Chicago to San Francisco, depending upon how the “points” are set.
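The contrast between the two models can be made concrete in a toy sketch – this is purely illustrative, not an implementation of any published theory of the brain:

```python
# Toy contrast between the two models described above.

def static_network(x, weights=(0.5, 0.3)):
    # "Road" model: every input flows through the same fixed, weighted
    # paths; only the strengths of the connections matter.
    return sum(w * x for w in weights)

def gated_network(x, gate):
    # "Railroad" model: a separate gating signal sets the points,
    # routing the same input down entirely different pathways.
    pathways = {"boston": lambda v: v * 2, "san_francisco": lambda v: v - 1}
    return pathways[gate](x)

print(static_network(10))                  # 8.0 (same route every time)
print(gated_network(10, "boston"))         # 20
print(gated_network(10, "san_francisco"))  # 9
```

In the static model, changing behavior means retraining the weights; in the gated model, the same machinery produces qualitatively different outcomes depending on how the switches are set.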

Even more revolutionary is the hypothesis that the neurons themselves are repositories of information in the brain, not simply conduits, and that information is “stored” in them through a biochemical process called methylation.  We now know that methyl groups (CH3) often serve as chemical tags that differentially affect the activity of the DNA in a cell.

If this turns out to be true, it remains to be seen how this vast new complexity might affect how we build the next generation of AI systems and how they will perform.  It’s still very much a black box with only a few points of light.  And so is the utopian/dystopian vision of a superhuman artificial general intelligence – Hal on Earth.  But stay tuned. 

—————-

A good overview of the state of the art in AI can be found in a recent PBS (NOVA) documentary:

“Can We Build a Brain?”  https://www.pbs.org/video/nova-wonders-can-we-build-a-brain-j53aqg
