March/April 2017

Preparing for Artificial Intelligence Before It’s Too Late

By Adam Benjamin

Perhaps the question is better phrased this way: What happens when intelligence and self-awareness are no longer traits that belong exclusively to the natural world? Other animals display varying degrees of intelligence, and some are even capable of self-recognition,[1] but how can we prepare for a world where intelligence shifts from a natural phenomenon to an artificial one? Do we even have the cognitive tools to imagine what such a world might look like?

In most of our storytelling, these kinds of questions are dramatized, most often catastrophized. The singularity—the moment when machines reach true intelligence and self-awareness—is synonymous with judgment day. In a select few tales, it might instead mean the start of a new, seemingly prosperous era, but more often than not, we view machine intelligence as the start of humanity’s downfall.

Such thinking vastly oversimplifies the subject.

Artificial intelligence is not a switch that will be flipped from “off” to “on.” It is a variable slider, which is constantly being pushed forward from “less” to “more.” And as we move that slider, our society will have to adapt, thinking through large-scale, complex implications and determining how we need to approach the changes. The process will not be easy, and it will not be fast—which is precisely why we need to begin forecasting and preparing now, before the pace of innovation exceeds our ability to catch up … if it hasn’t already.

This piece will examine artificial intelligence from several vantages. First, we’ll look at the technology perspective: What is happening today, and how do we expect the technology to evolve in the future? Then we’ll look at the societal side of AI: What might this technology mean for larger social structures? Finally, we’ll examine artificial intelligence from a policy perspective: What kinds of questions do we need to be asking now to prepare ourselves for a future where machines might have more control over our lives than we do?

We hope to provide a broad survey of information that might serve as a foundation for the types of discussions we need to be having right now. But that all begins with one important question…

What, exactly, is artificial intelligence?

Defining AI

The term “artificial intelligence” is easy to misinterpret. Part of the problem is that artificial intelligence seems self-explanatory: thinking and processing (“intelligence”) that is designed and constructed (“artificial”). In other words, AI is just a smart machine, right?

Not quite. And here we find another part of the problem: We tend to think of technology in terms of hardware. But “intelligence” is not synonymous with “brain,” so it would be a mistake to think of AI as a particularly clever computer or device. Rather, at least for the purposes of this discussion, we’re talking about systems and programming—the ability to think and, ultimately, understand.

The phrase “artificial intelligence” doesn’t refer to a smart machine; it refers to that machine’s ability to perceive, reason, and learn, ideally using those processes to improve its problem-solving abilities.

The applications of AI are virtually endless, but we should not confuse the intelligent system itself with the hardware it serves in, say, helping car cameras distinguish potentially hazardous situations. Artificial intelligence—or at least some prototypical version of it—exists in the software that allows the car to “see” the road and distinguish the car driving in front of you from a pedestrian rushing out into the road. The car itself is just another piece of technology that takes advantage of artificial intelligence. So when we talk about AI, we’re talking about the programming itself, not the individual car, or even the entire line of cars, that is beginning to make decisions.

Pop culture can also offer us tools and frameworks for thinking about artificial intelligence. Perhaps the most well-known example of AI in pop culture comes from the Terminator movie franchise. Its vision of AI is relatively straightforward—and quite grim. An artificially intelligent program named Skynet is activated by the military in an attempt to secure peace. However, Skynet decides that humans are an inherent threat to peace, and that the most efficient way of safeguarding the world is to destroy all humans. (The U.K., apparently, saw no irony when it named its fleet of military communications satellites Skynet.[2])

My point isn’t to reinforce fears that artificial intelligence will be the downfall of humanity (AI experts are doing plenty of that themselves—just ask Stephen Hawking, who issued an alarming opinion on AI two years ago: “The development of full artificial intelligence could spell the end of the human race”[3]). Rather, it’s to illustrate how Terminator’s Skynet was an artificially intelligent system, not just a single machine. It was a program that was given a task (vaguely, “maintain peace”), presumably ran through large amounts of data and possible scenarios, and came to a conclusion: Humans are the biggest obstacle to peace.

Today, we’re attempting to build AI systems that perform many of those same tasks, just on much smaller scales.

The biggest trend in artificial intelligence right now is deep learning, a process by which a system examines information through several different layers to determine whether that information fits into a particular schema. Deep learning systems, sometimes called neural networks, allow an app like Facebook to recognize individual people in photos. The lowest levels of these networks look at the simplest pieces of information. Usually, these are basic shapes and fragments of edges—a small selection of the overall picture. Each higher level looks at information of increasing complexity, until you reach the highest levels, which take all of that data in aggregate and use it to determine whether this is a picture of you or your best friend.
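To make that layered structure concrete, here is a minimal sketch in Python. It is emphatically not how a production network at Facebook works (those use convolutional layers trained at enormous scale); the layer sizes and random weights below are invented purely to show how each layer transforms the output of the one beneath it.

```python
import numpy as np

def relu(x):
    """A simple nonlinearity applied between layers."""
    return np.maximum(0, x)

# A toy three-layer network. All shapes and weights here are invented
# for illustration; real networks are vastly larger and are trained,
# not randomly initialized and left as-is.
rng = np.random.default_rng(0)
layers = [
    rng.normal(size=(64, 32)),  # layer 1: raw pixels -> edge-like fragments
    rng.normal(size=(32, 16)),  # layer 2: edges -> simple shapes
    rng.normal(size=(16, 2)),   # layer 3: shapes -> "you" vs. "your friend"
]

def forward(pixels):
    """Pass a flattened 8x8 image through each layer in turn."""
    activation = pixels
    for weights in layers:
        activation = relu(activation @ weights)
    return activation  # aggregate scores for the two possible people

image = rng.random(64)  # stand-in for a tiny grayscale photo
print(forward(image))
```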

But here’s the catch about deep learning: These systems aren’t programmed to break things up that particular way and use that process to come to their conclusions. The systems are teaching themselves to do it. As Fortune magazine described the process:

“Programmers have … fed the computer a learning algorithm, exposed it to terabytes of data—hundreds of thousands of images or years’ worth of speech samples—to train it, and have then allowed the computer to figure out for itself how to recognize the desired objects, words, or sentences.”[4]

In essence, these AI programs are figuring things out on their own. It’s like giving someone a radio and watching them take it apart to figure out how the radio works. Except, with computer systems, we can give them huge numbers of metaphorical radios to disassemble. With each radio, the system better learns the concept and how to recognize it.
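For readers who want to see the “many radios” idea in code, the toy training loop below (a minimal sketch, with an invented rule and invented data) never contains the rule it is supposed to find; the program recovers it from labeled examples alone, getting a little better with each pass.

```python
import numpy as np

rng = np.random.default_rng(1)

# The hidden "rule" the program must discover: label is 1 when x + y > 1.
# The rule appears only here, in the data generation -- never in the
# learning logic. Each labeled point is a metaphorical radio to take apart.
points = rng.random((1000, 2))
labels = (points.sum(axis=1) > 1).astype(float)

weights = np.zeros(2)
bias = 0.0
lr = 0.1  # learning rate: how big a nudge each mistake produces

for epoch in range(200):
    # Predict, measure the error, and nudge the weights to shrink it.
    preds = 1 / (1 + np.exp(-(points @ weights + bias)))  # sigmoid
    error = preds - labels
    weights -= lr * (points.T @ error) / len(points)
    bias -= lr * error.mean()

accuracy = ((preds > 0.5) == labels).mean()
print(f"learned weights: {weights}, accuracy: {accuracy:.0%}")
```

No line of the program states the rule, yet the learned weights end up pointing in its direction. Scale the same mechanic up by many orders of magnitude and you have the deep learning systems described above.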

Cloud computing—the use of many connected servers for data storage and processing—is another major trend in artificial intelligence. Localized computing is restricted to the amount of data on any one drive or server, so it limits the information machines have available for learning. It’s like trying to learn about a new subject while only using books from your friend’s personal collection. Cloud computing vastly increases the amount of data available. In our analogy, it’s the equivalent of expanding your resources to a large network of well-stocked libraries. As people create larger and larger quantities of data—more pictures, emails, videos, etc.—more companies are relying on cloud storage solutions to host that data, which in turn creates larger reserves of data for AI systems to learn from.

With that as a foundation for understanding artificial intelligence, it’s time to turn to the bigger and more complex questions: What’s coming next … and how can we possibly prepare for it?

AI’s Evolution

One of the challenges of preparing the world for an artificially intelligent future is that developments in AI are fundamentally difficult to predict. That’s not to say we’re completely blind to what’s coming down the road, but it’s a little bit like forecasting weather: We can only see so far ahead, and small, unexpected shifts may drastically change that forecast.

There are some major developments looming on the horizon, though. One is known as artificial intelligence-as-a-service. The “-as-a-service” tag is the popular nomenclature for various cloud computing services that companies can purchase at different scales according to their needs. Infrastructure, platforms, and software are the three pillars of cloud computing services, and artificial intelligence seems poised to join them. Companies will be able to purchase the problem-solving power of artificial intelligence programs, giving them an edge over competitors who are still trying to solve problems with old-fashioned human brainpower.
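As a sketch of what purchasing that problem-solving power might look like, consider the hypothetical call below. The endpoint, model name, and parameters are all invented for illustration; the point is that the intelligence lives behind a rented cloud service, exactly as storage and software do today.

```python
import requests

# Hypothetical example only: "api.example-ai.com" and every parameter
# here are invented to show the *shape* of an AI-as-a-service call,
# in the same spirit as today's infrastructure- and software-as-a-service APIs.
response = requests.post(
    "https://api.example-ai.com/v1/classify",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={
        "model": "image-recognition-basic",  # the tier the company pays for
        "image_url": "https://example.com/loading-dock.jpg",
        "labels": ["forklift", "pedestrian", "pallet"],
    },
    timeout=30,
)
print(response.json())  # e.g. {'label': 'pedestrian', 'confidence': 0.97}
```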

Another development that ties in closely with AI-as-a-service is conversational technology. So far, our ability to communicate with machines has been restricted to coding and whatever natural language we’ve been able to program. But natural language processing is already on the verge of clearing long-standing barriers: Just look at applications like Apple’s Siri and Microsoft’s Cortana, or Google’s new Home products. In just a few years, these apps and devices have all made startling advancements in machines’ ability to understand human language. The popular Skype application can even translate between eight spoken languages in real time, although the feature is still in development.[5] Artificial intelligence allows these applications to rapidly study and continually refine their communication abilities, learning to better identify and respond in natural language over time. And these are only the first steps toward real, widespread conversational technology—the kind we’re used to seeing in science fiction, like Star Wars’ C-3PO.
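A toy version of the underlying mechanic can be sketched with off-the-shelf machine learning tools. The handful of training phrases below is invented, and real assistants learn from vastly larger bodies of speech and text, but the principle is the same: the program maps an utterance it has never seen to an intent based on the examples it has studied.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented toy training data -- a real assistant learns from vastly
# more speech and text than this handful of phrases.
phrases = [
    "what's the weather today", "will it rain tomorrow",
    "set an alarm for 7 am", "wake me up at six",
    "play some jazz", "put on my workout playlist",
]
intents = ["weather", "weather", "alarm", "alarm", "music", "music"]

# Turn each phrase into word counts, then learn which words signal
# which intent.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(phrases, intents)

# A sentence the model has never seen still maps to the right intent.
print(model.predict(["is it going to snow this weekend"]))  # -> ['weather']
```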

Beyond these developments, though, the future of artificial intelligence is cloudy. While that might seem like a good reason to hold off on discussions of policy until later, it’s actually a compelling argument for why we need to start those conversations now: Because once those changes begin to take hold, we have no idea how quickly they might outpace our ability to adapt.

Before we explore the policy implications of these potential changes, though, let’s address the matter of timing. Skeptics will probably be quick to question exactly how fast advances in artificial intelligence can actually ripple through society. After all, technological advancement is a slow, iterative process, right?

The answer is yes, it has been—but that doesn’t mean it always will be. The history of technological innovation has been paced at the rate of human understanding and ingenuity. But artificially intelligent systems can accumulate knowledge and iterate almost unfathomably faster. We’re already building machines that can program themselves—that is, essentially, how deep learning works; the program is teaching itself to recognize certain kinds of information—so it’s not a particularly far jump to imagine a point at which AI programs are creating even more efficient program-creating systems. Within a few generations of these systems (which, again, should appear increasingly quickly), the speed and complexity of programming will be fundamentally outside the range of human capacity. In other words, we may not be far away from creating a machine that builds new programs that are outside the boundaries of human comprehension. And because computer programs can test and process information so much faster than humans can, those programs will be able to analyze the results and refine their creations with alarming efficiency.

Imagine this scenario: Two people are asked to write a story. One person is given pen and paper, while the other receives a computer with a word processor. Assuming basic typing skills and similar amounts of time needed to invent the story, the person with the computer will finish writing the story sooner—typing is generally faster than writing things out by hand. Then, imagine each person is asked to revise the story, adding new sections and resubmitting the entire piece. Now the comparison is stark: The person with the computer can delete bad sections and easily insert new ones where they need to be. The person with pen and paper must write everything out again, inventing new sections along the way. Each new revision widens the gap between the writer with the computer and the one with old-fashioned writing implements. The person with the word processor might finish the sixth revision while the other person is just beginning the third draft.

That is essentially the situation we face with artificial intelligence—we are stuck with pen and paper, while machines can create, analyze, and adapt programs at an alarmingly fast pace. Which is precisely why we can’t wait to begin discussions of policy: If we adopt a reactionary attitude, we’ll be so far behind technology, we may never catch up.

Consider this: Businesses have spent much of the past decade adjusting their policies for a world where nearly all employees own a smartphone and want the freedom to choose their own devices. It simply wasn’t something most companies were prepared for, and chief information officers and other IT managers were forced to spend a lot of their time drafting solutions for the problem of smart device proliferation—time that could have been spent solving other informational problems.

And, while managing apps and informational outflow through an ecosystem of hundreds of different devices built to varying specifications was certainly a headache for many companies, the evolution of artificial intelligence is likely to have even more extreme and widespread effects.

Societal Effects of AI

The example of smartphones and tablets in the workplace is only one rather limited example of how technology affects our daily lives. The history of technology causing ripples—and sometimes violent quakes—through society is long and storied, although the recent pace of technological advancement seems to have dulled us to this relationship.

Let’s take a quick look at some of the highlights: The wheel dramatically changed the transportation of goods, both in speed and quantity. The invention of the printing press in the 15th century allowed information to be reproduced in large quantities and spread throughout the world. The internal combustion engine was the foundation for most modern forms of transportation. The light bulb—a significant improvement in lighting over the candle—revolutionized productivity by allowing people to work late into the night, and it also led the charge to bring electricity to our homes. Then, of course, there’s the internet, which makes the printing press look shamefully primitive in its ability to spread information across the globe.

Again, those are only some of the biggest highlights, not accounting for similarly powerful advancements in medicine and other fields of study. But it serves as a crude sketch of how one development can alter the fabric of society (just try to imagine the contemporary world without the internet). Artificial intelligence is, by all appearances, on a very similar path, which means that we have no time to waste when planning policies around the technology.

Rather than attempting to predict entirely new applications of AI and how those unforeseeable developments might affect society, let’s instead look back at current developments and use those to extrapolate other potential changes.

Perhaps the most appropriate place to focus is the deep learning, or neural network, technology described earlier. This technology allows programs to analyze information on various structured levels, with each level looking at that information from a different perspective, allowing the program to “teach” itself to identify particular kinds of information. Deep learning has led to major advancements in image recognition, and as a result, has started a deep learning arms race among tech companies. Facebook, Google, and Microsoft are all on the forefront of neural network technology, and each is competing for business in a variety of sectors with a stake in better image recognition: Car manufacturers want better adaptive response technology, security firms benefit from better facial imaging, military intelligence can gather more accurate information from safer distances, and even medicine can use the technology to diagnose problems earlier and with fewer errors.

In other words, artificial intelligence is quickly becoming big business.

But AI could become more than just an important corporate advantage. In fact, we could be on the cusp of a world where AI technology is actually just the cost of entry for a functional business, let alone a thriving one. Imagine being given the option between two cars, completely identical except for the fact that one has technology that enables it to recognize and respond to dangerous situations, raising its cost by $3,000. Will most people be willing to risk bodily harm to save a few thousand dollars? More importantly—how long will car manufacturers continue giving consumers the option, when they can simply install the technology as a standard feature, like power steering? AI-driven advantages will become increasingly crucial for business success, which could lead to a culling of companies that can’t keep up.

But even that assessment only scratches the surface of how AI might alter our societal fabric. Sure, efficient neural networks can offer the highest bidder an immense competitive advantage in the markets, and investments in artificial intelligence might begin to take precedence over all other forms of capital expenditure. But look at the bigger picture: Companies may come to depend on the advantages offered by AI and ruthlessly cut anything less efficient. If computers begin to diagnose broken bones faster and more accurately than humans, what happens to radiologists? The mechanization of labor has largely been restricted to manual labor, but what happens when it starts to affect jobs that depend on intellectual labor? Exactly how many jobs can artificial intelligence do better than human workers—or is there even a significant limit?

And that’s one of the more concrete examples. Our very concept of society might have to adjust to accommodate the ascendancy of artificially intelligent systems. If we create programs with “true intelligence” akin to human intelligence, must we then consider those programs as people? Do they have rights and deserve protections like everyone else? These considerations frankly deserve their own separate discussion, but they serve as an illustration of the scale we’re talking about.

Preparing AI Policy

We need to immediately start thinking about the specific ways technology will shape and change society so that we may plan appropriate policy to prepare for such a future.

But how do we prepare for a future that is, by most appearances, highly unpredictable? We can start by looking at current trends and the questions they raise, and then look for patterns and broader considerations that arise. At present, a few questions seem particularly worth exploring: First, who is responsible for the decisions made by AI programs? Second, should we be designing policies that provide oversight for the development of artificial intelligence? Third (but far from finally), exactly what role do we want AI to play in society, and are there processes we want to exclude it from?

To be clear: The intention here is not to answer these questions within the confines of this article. Rather, we should hope to examine the issues, consider different angles from which we can explore them, and create a stepping stone for future discussions and, eventually, policy decisions.

Let’s begin with the question of responsibility. As mentioned earlier, AI systems have already made their way into car computers for the purposes of analyzing driving performance and assisting drivers in averting danger. At some point, however, these systems will be forced to make a decision that prioritizes one party’s safety (e.g., the passengers of one car) over another’s (e.g., the passengers of another car).

Perhaps a driver is passing through a green light at an intersection when another car turns through a red light, jumping in front of the first car. That car’s driver assistance program may suddenly stop the car to avoid a front-end collision. But what if doing so causes a rear-end collision? Who is responsible for the results of that decision? Is it the driver who purchased the car—does she assume responsibility by adopting the technology? Is it the car manufacturer, responsible because it produced the machine (the vehicle) that created the situation? Or is it perhaps the company that programmed the AI system—assuming that company is separate from the car manufacturer?
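To see why the question is so thorny, here is a deliberately crude sketch of how such a priority might be encoded. It is purely hypothetical (no actual driver assistance system is known to work this way), but it makes one thing plain: some weighting between the car’s occupants and everyone else has to live somewhere in the code, and whoever chooses those weights is, in effect, making policy.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    description: str
    occupant_risk: float  # estimated injury risk to this car's passengers
    other_risk: float     # estimated injury risk to everyone else

def choose(outcomes, occupant_weight=1.0, other_weight=1.0):
    """Pick the outcome with the lowest weighted total risk.

    Purely hypothetical: the formula, the weights, and the risk numbers
    below are all invented. The point is that *some* weighting must be
    encoded, and choosing it is a policy decision, not a technical one.
    """
    return min(
        outcomes,
        key=lambda o: occupant_weight * o.occupant_risk
                    + other_weight * o.other_risk,
    )

options = [
    Outcome("brake hard (risk a rear-end collision)", 0.30, 0.20),
    Outcome("continue (risk a front-end collision)", 0.60, 0.50),
]
print(choose(options).description)
```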

Holding the car owner responsible might affect consumers’ willingness to purchase cars with driver assistance technology, but placing that responsibility with the car manufacturers or AI programmers may have different effects. While there are valid points to be made for any of those scenarios, the important thing is to determine which party is responsible for the computer program’s decisions—especially if those choices have implications for other people’s safety. The answer to this question may also inform situations like drone usage—are AI-controlled drones fundamentally different from driver-controlled cars with driver assistance technology?—and other artificially intelligent programs.

And the matter of liability is just one small piece of a very large puzzle … a puzzle whose pieces are still hazy and whose picture seems to be constantly changing. That’s why we may need to be more proactive about creating oversight for the development of AI. Instead of trying to catch up with artificial intelligence after the fact, we may be able to get ahead of problems by creating policies that regulate AI systems as they’re being developed. Such a process would necessarily involve big, theoretical questions about the role we want artificial intelligence to play in society, and—perhaps more important—the roles we don’t want it to play. Are there particular aspects of the human experience we don’t want to be influenced by artificially intelligent programs?

For example, does AI belong in judicial systems? Would we be comfortable (or ethically justified) allowing a computer program, no matter how smart, to determine a person’s criminal guilt or innocence? Do we want artificially intelligent programs playing any role in our legal system?

As we develop AI to become increasingly efficient at processing and analyzing data, we must think ahead about areas of society that are most at risk of being dramatically altered. A robust AI policy plan will need to outline major areas of concern while still allowing expansion or refinement in the future as the path of artificial intelligence becomes clearer. Such a policy can only be built over time, with great effort and the collaboration of many experts and stakeholders. And the longer we delay those discussions, the greater risk we place upon ourselves.

We Need to Act Now

Elon Musk, founder of Tesla Motors and Space Exploration Technologies Corporation (SpaceX), addressed the possible societal impacts of artificial intelligence last year. “There is a pretty good chance we end up with a universal basic income, or something like that, due to automation,” Musk told CNBC, later adding, “there has to be some improved symbiosis with digital super intelligence.”[6] Whether that’s a cautious warning or a call for optimism about our robotic future depends on your perspective—a universal basic income would involve enormous changes to world economies, but then again, artificially intelligent systems may do the very same. Either way, it’s a clear indication that Musk sees AI playing a very large role in our future society.

There’s still a very large debate going on in the artificial intelligence field between people who believe “true” AI is ultimately a mythical concept that we’ll never produce and people who believe it is both very possible and very likely catastrophic. The former group claims that fears of a menacing, destructive AI program have no basis in reality—that machine learning is fundamentally different from true intelligence, which means that computer programs are incapable of sentience. The latter group believes that our human brains may be unable to even imagine the kind of intelligence we may be creating, and so we must be very careful about the kinds of programs we develop and the kinds of moral, human values we put into them. As Paul Ford explained in his profile of artificial intelligence in the MIT Technology Review in 2015: “We’re basically telling a god how we’d like to be treated.”[7]

There’s no way to be certain which scenario we will ultimately face. However, that doesn’t mean we have to wait around before we start adapting.

Indeed, by most accounts, we cannot possibly afford to wait. 

 

ADAM BENJAMIN is a freelance writer living in Seattle.

Endnotes

[1] “Mirror Test”; Science Daily. Accessed Jan. 30, 2017, at https://www.sciencedaily.com/terms/mirror_test.htm.

[2] “UK’s Skynet military satellite launched”; BBC News; Dec. 29, 2012.

[3] “Should we be afraid of AI?”; Aeon; May 9, 2016.

[4] “Why Deep Learning Is Suddenly Changing Your Life”; Fortune; Sept. 28, 2016.

[5] “Skype Translator”; Skype. Accessed Jan. 20, 2017, at https://www.skype.com/en/features/skype-translator/.

[6] “Musk: We need universal basic income because robots will take all the jobs”; Ars Technica UK; Nov. 7, 2016.

[7] “Our Fear of Artificial Intelligence”; MIT Technology Review; Feb. 11, 2015.
