From winning Jeopardy in 2011 to helping write a sad song last year, IBM’s Watson cognitive computing platform is all over popular culture. Press releases fly out about Watson producing a movie trailer, powering a Macy’s shopping app, even controlling lights on an internet-connected dress—along with more serious applications like working on cancer treatments. It seems, from IBM’s hype, that Watson can do everything.
But Bernie Meyerson, IBM’s chief innovation officer, wants to dial back the hype in some ways, calling Watson “just the first step on a very, very long road.” Watson can be helpful in a lot of industries, such as medicine, which are awash in data, but it can’t replace people, he says. And Meyerson is wary of the baggage that the term “artificial intelligence” brings with it—notions that we’re anywhere near computers that can think like humans. (IBM tends to favor terms like “cognitive computing” or “augmented intelligence.”)
“It’s not about the damn Turing Test,” Meyerson says in his tough Bronx accent. “I’m not trying to fool somebody [into thinking a computer is a real person]. Gimme a break. That’s never been the point.” Watson’s job, he says, is instead to work through the rising flood of data that modern technology is producing—far more than people can handle. “That is, potentially, a hard stop to human progress.”
A doctor reads about a half dozen medical research papers in a month, Meyerson says, whereas Watson can read a half million in about 15 seconds. From that, machine learning (one of the key types of artificial intelligence today) can suggest diagnoses and the most promising course of treatment. Watson was trained on cancer at Memorial Sloan Kettering in New York City—reviewing research, test results, even doctors’ and nurses’ notes to discover patterns in how the diseases develop and what treatments work best.
It’s doing some of the doctors’ jobs—taking on an inhuman amount of grunt work—but it’s not replacing them, Meyerson stresses. “Human brains bring passion to the work, they bring common sense,” he says. “By its definition, common sense is not a fact-based undertaking. It is a judgment call.”
He gives the example of a radiologist looking through dozens of MRI images of the brain. A tiny but deadly hemorrhage could be less than 4mm long. Machine vision (another type of AI) can zip through the images and circle any marks that look like a brain bleed for the doctor to examine. “You may have circled 20 places. Well that beats the hell out of looking through 150 images,” he says. “Your ability to see the thing can go from damn near zero to damn near 80% to 90%.”
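The triage idea Meyerson describes can be sketched in a few lines of code: software scans an image, flags anything that looks suspicious, and hands a short candidate list to a human expert. The naive intensity threshold below is only an illustration of that workflow, not Watson's actual vision pipeline, which relies on trained models.

```python
# Toy sketch of machine-assisted triage: flag candidate spots in an
# image so a human reviews a short list instead of every frame.
# A naive brightness threshold stands in for a real trained model.

def flag_candidates(image, threshold=200):
    """Return (row, col) coordinates of unusually bright pixels."""
    candidates = []
    for r, row in enumerate(image):
        for c, value in enumerate(row):
            if value >= threshold:
                candidates.append((r, c))
    return candidates

# A made-up 3x4 grayscale "scan" with two bright spots to review.
scan = [
    [10, 12, 11, 9],
    [13, 250, 14, 10],
    [11, 10, 12, 230],
]
print(flag_candidates(scan))  # [(1, 1), (2, 3)]
```

The point of the sketch is the division of labor: the machine narrows 150 images down to a handful of circled regions, and the radiologist's judgment does the rest.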
Watson Health, as IBM’s offering is called, isn’t alone in providing machine learning, machine vision, natural language processing, and other AI to the medical field. And competitors also temper their claims. “Does [AI] really understand cancer? Not like a doctor does,” says Tim Estes, CEO of AI company Digital Reasoning. “But can it see signs of cancer based on how cancer is talked about? Absolutely.” His firm, which started with government counterterrorism clients, then financial sector clients, is now moving into health care.
MAKING WATSON ELEMENTARY
Digital Reasoning is just one among many AI companies that focus on an industry and maybe move cautiously into new ones. For instance, Clarifai has built a powerful image-recognition system used by companies such as Unilever, Trivago, and BuzzFeed. Textio specializes in natural language processing (NLP)—understanding nuance, such as bias, in how people write; it offers NLP specifically for job recruiting. Other companies focus their AI on the job interview process. Yet others focus on law, such as surfacing evidence or analyzing contracts. LexMachina (owned by LexisNexis) specializes in analyzing the track records of judges and attorneys that a lawyer may be dealing with; it covers only patent and antitrust litigation.
But it all started with a few games. Watson's origins go back to Deep Blue, the IBM system that beat reigning world chess champion Garry Kasparov in 1997. IBM named its AI in honor of the company's first CEO, Thomas J. Watson, and its first achievement was to beat Jeopardy's winningest contestant, Ken Jennings, in February 2011.
Rivals criticize IBM for pursuing publicity gimmicks. Big Blue calls them grand challenges to push its technology. “If we can solve these problems, we will have built something of high value to IBM’s clients and society in general,” Eric Brown, director of Watson Algorithms, said at the Watson Developers Conference this past November. Selecting among a vast number of possible chess moves and strategies improved Deep Blue’s search ability, he noted.
In 2004, IBM's technology for understanding the nuances and multiple meanings of natural language (the way people really talk and write) was pretty poor. Tackling Jeopardy was a fun, visible way to improve that technology, Brown said. It took seven years.
In August 2011, Watson went from a project to a product. “We spent the first two and a half years working on solutions, specifically in health care, that gave us enough experience to understand what was needed to build those solutions,” says Rob High, Watson’s CTO.
Watson has sprawled since then. At its World of Watson convention in October 2016, IBM introduced new AI services for marketing, order fulfillment, supply chain management, workplace collaboration, and human resources (including recruiting and hiring). It already had offerings for health, education, financial services, and “internet of things” management of connected gadgets and sensors. It’s now working on a smart assistant for cars through GM’s OnStar program.
Beyond industry-specific services, IBM also has a DIY offering, Watson Developer Cloud. “At the end of 2013 we were becoming quite aware that we couldn’t keep up with all the demand that was out there for the different solutions that everybody wanted,” says High. Like Apple with the iPhone and App Store, in late 2015 IBM opened up Watson APIs (application programming interfaces) for developers to plug their own programs into its cloud-based AI system.
Watson APIs include visual recognition to challenge a company like Clarifai, for instance. There are eight language tools, including ones that translate, analyze the sentiment of posts (challenging sentiment analysis offerings like Adobe’s), assess the tone that comes across in text that clients write (Textio’s territory), and even try to understand the personality of users, such as customers interacting with a service bot.
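To make the shape of such a sentiment tool concrete, here is a toy sketch of the kind of result a sentiment-analysis API returns: a label and a score for a piece of text. Real services like Watson's use trained models; this keyword tally, with made-up word lists, only illustrates the interface.

```python
# Toy sentiment scorer: count hand-picked positive and negative words
# and return a label plus score, mimicking the shape of a sentiment
# API response. The word lists are invented for illustration.

POSITIVE = {"great", "love", "happy", "excellent"}
NEGATIVE = {"bad", "hate", "angry", "terrible"}

def sentiment(text):
    """Return a dict with a sentiment label and a crude score."""
    words = [w.strip(".,!?") for w in text.lower().split()]
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    return {"label": label, "score": score}

print(sentiment("I love this great service"))  # {'label': 'positive', 'score': 2}
print(sentiment("Terrible, I hate waiting"))   # {'label': 'negative', 'score': -2}
```

A developer calling a cloud API would get a similar structured result back over HTTP, which is what makes the services easy to plug into a customer-facing bot.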
“As we ramp Watson up, this huge developer community can operate on the platform,” says Meyerson. “Because who says we have a lock on all the applications that can be necessary?” That drums up business from clients who build Watson apps for their companies and then pay for use of the service.
Then there are simpler tools that a non-techy person could use, like Watson Analytics. People can upload any data—say, from a spreadsheet. The online tool walks users through its analysis of patterns or anomalies, such as in sales figures, and correlates them with other info sources IBM provides, like weather and location information. Users can ask Watson questions about the data and what Watson has found by typing or speaking naturally, the way people really talk. Watson Analytics is IBM’s answer to rival business intelligence software like Microsoft Power BI, Qlik, and Tableau.
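The anomaly-spotting described above can be sketched with a simple statistical check: flag any value that sits far from the rest of the data. This z-score example with invented sales figures stands in for IBM's actual, far more sophisticated analysis.

```python
# Minimal anomaly-spotting sketch: flag values more than a given
# number of standard deviations from the mean. A stand-in for the
# pattern analysis a tool like Watson Analytics performs.
import statistics

def find_anomalies(values, z_cutoff=2.0):
    """Return (index, value) pairs that lie far from the mean."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_cutoff]

# Made-up monthly sales figures with one suspicious spike.
monthly_sales = [102, 98, 105, 101, 97, 240, 103, 99]
print(find_anomalies(monthly_sales))  # [(5, 240)]
```

The value of the product, of course, is that users never see this math: they just ask, in plain language, why June's numbers look odd.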
The goal, it seems, is to put IBM at the center of artificial intelligence whenever possible. “We create a platform on which it sits, and it’s eminently adaptable because you can call up services from the platform,” says Meyerson. “What we’re really enabling is an ecosystem which will not just be us. It will be us plus those globally that decide, ‘Yeah, I can use this as a tool.'”
There’s one kind of AI that IBM isn’t developing: the human-like artificial general intelligence (AGI) fantasized in movies like 2001, Her, and Ex Machina. There’s a good reason why IBM doesn’t work on AGI: It doesn’t, and may never, exist. “I tend to be—how should I put it?—focused more on execution,” says Meyerson, who took his PhD in solid state physics to IBM in 1980 to work on silicon chip development. “Currently we are so far from general intelligence.” He points to the Turing test as a red herring. “Even if I trick you into believing that behind the wall is a human being as opposed to a computer, what does that have to do with it being necessarily intelligent?”
There may be a simpler benefit to the Turing Test approach, though: An AI that seems “real” is easier to relate to. Making AI feel more accessible is important for IBM’s strategy to keep growing. At its Watson Developers Conference, IBM unveiled a new, experimental offering called Project Intu that aims to understand not just what people say, but their mood when they say it, even their personality. It already has technologies for this, like Watson’s language tools. Intu will gather data like the weather to judge how that may affect people’s mood.
Watson can also ape a tone to match the user. According to High, “We can, by combining all those things, synthesize an emotional state for the interface that when I’m interacting with it, I can hear through its tone either greater or lesser happiness, for example, that is a little more appropriate for the circumstances.” That could include changing rate and tone of voice, like being chipper, sedate, or apologetic. High promises that a customer service bot could even talk down an irate customer so they can focus on fixing the problem.
“I don’t want to go so far as to say we’ve synthesized the human mind’s ability to form emotional positions, because that’s not the point, right?” High says. “The point is to have an emotional state that can be used in a way to interact with me.”
In other words, Watson is faking it. There isn’t a human or anything else sentient behind the wall. IBM isn’t building an AI that could decide to take over the world. But it is striving to build a business that can take over the world of AI.