By Jim Lynch
- The Age of AI and Our Human Future
- Power and Prediction: The Disruptive Economics of Artificial Intelligence
- Working With AI: Real Stories of Human-Machine Collaboration
I don’t know much about artificial intelligence, but, geez, what does Henry Kissinger know?
Why ask? Last year the Machiavellian-cum-Methuselah (via the Theranos board of directors) co-wrote The Age of AI and Our Human Future.
Lots of books try to describe how AI will change the world. I read three. I posed as a busy executive who knows AI is a big deal but doesn’t want to be perplexed by technological razzle-dazzle.
Now, it’s unrealistic to expect these books to predict the future. Steve Jobs didn’t introduce the iPhone by calling an Uber. No, he talked about an iPod + a cell phone + a browser that was “smarter and easier to use.” Your finger would replace the stylus! You could scroll the internet while listening to Red Hot Chili Peppers!
If Steve didn’t know what he was holding in 2007, it’s unfair to expect more of Richard Nixon’s secretary of state.
Kissinger did have help: Eric Schmidt, ex-CEO of Google, and Daniel Huttenlocher, dean of MIT’s Schwarzman College of Computing.
They are more concerned about geopolitical issues: how AI will affect foreign relations and wars and some such.
The analysis can be banal. Like, I already knew that weapon build-ups in the early 20th century led to a senseless world war. Prescriptions often arrive via the House of Duh: “We will need to achieve a balance of power that accounts for the intangibles of cyber conflicts and mass-scale disinformation as well as the distinctive qualities of AI-facilitated war.”
In Power and Prediction: The Disruptive Economics of Artificial Intelligence, three University of Toronto professors make some interesting points.
First, AI is in its early stages, and like any emerging technology, it seems overrated and needs a few decades to catch on. They invoke the coming of electricity, but I think airplanes make a more vivid metaphor. After all, the Wright Brothers’ first flight went 120 feet, with no passengers or cargo. The 747 came later.
Second, they assert AI splits prediction and judgment. Netflix AI predicts what movie you will like, but you decide what to watch. Splitting prediction and judgment creates a power struggle and a challenge to progress, they argue. I think that oversimplifies. Most organizations already separate prediction and judgment. Actuaries predict what rates are appropriate. The final rate is a judgment of underwriters, actuaries, marketing, claims, executives, and (ultimately) regulators.
Businesses need to look at AI innovations holistically, they argue, as the innovations in one area can create problems elsewhere. The holistic process they set out sounds like a standard blue-sky session, and I don’t recall the authors saying how it is not.
They construct an example—homeowners insurance. They suggest an insurer consider how marketing, underwriting, and claims could be reconfigured by a bot that mitigated risk. It could change the value proposition of the industry, they argue.
I don’t think this reflects reality. You don’t need AI to do this. Insurers have encouraged risk management for decades. It’s a college major, for heaven’s sake.
The challenge is not organizational but financial. Most safety features cost more than the expected losses they prevent. Cheap AI would help, but then you wouldn't need to overhaul operations to adopt it.
The authors of Working With AI: Real Stories of Human-Machine Collaboration argue that these days AI enhances human workers, sort of the way the exosuit let Sigourney Weaver beat on the stowaway in Aliens.
They’ve done impressive field work: 29 case studies showing how people and bots work together. Each concludes with three or four “Lessons We Learned.”
The studies are short, but the authors’ clinical, sonorous style makes them blur after reading two or three. Perhaps they recognize this, so they encourage you to “hop around … in whatever order best suits your needs.” So I did.
First I read a half-dozen that interested me, including one about a life insurance bot that analyzed underwriting and third-party data for MassMutual. Sample lesson: “Digitized work processes provide complete visibility of process and individual employee performance.” (Note: The jargon could be stultifying.)
Next, I reviewed the lessons from all 29 studies. If a lesson seemed interesting, I read the study. Thus I learned about a bank’s anti-fraud program: “An automated system that generates large numbers of … false positives does not save human labor.”
Later chapters capture larger insights. One, “What Machines Can’t Do (Yet),” was particularly valuable, noting that AI cannot grasp how its work blends into the organization. The bot can do the work but doesn’t know what to do with it or why it should be done at all.
The book captures what I think an executive wants: Stories to share at the next conference watering hole and insights about what does and could work with the latest gizmo at the workplace.
JIM LYNCH, MAAA, FCAS, is a freelance writer.
References

Ajay Agrawal, Joshua Gans, and Avi Goldfarb are, respectively: professor of strategic management and Geoffrey Taber Chair in Entrepreneurship and Innovation; Jeffrey S. Skoll Chair of Technical Innovation and Entrepreneurship and professor of strategic management; and Rotman Chair in AI and Healthcare and professor of marketing.

Thomas H. Davenport is a professor at Babson College and a fellow of the MIT Initiative on the Digital Economy. Steven M. Miller is professor emeritus of information systems at Singapore Management University.