Competitive Intelligence: The real-world potential and limitations of artificial intelligence

The McKinsey Podcast

Artificial intelligence has the potential to create trillions of dollars of value across the economy—if business leaders work to understand what AI can and cannot do.

In this episode of the McKinsey Podcast, McKinsey Global Institute partner Michael Chui and MGI chairman and director James Manyika speak with McKinsey Publishing’s David Schwartz about the cutting edge of artificial intelligence.

Source: The McKinsey Podcast, McKinsey Quarterly


Podcast transcript

David Schwartz: Hello, and welcome to the McKinsey Podcast. I’m David Schwartz with McKinsey Publishing. Today, we’re going to be journeying to the frontiers of artificial intelligence. We’ll touch on what AI’s impact could be across multiple industries and functions. We’ll also explore limitations that, at least for now, stand in the way.

I’m joined by two McKinsey leaders who are at the point of the spear, Michael Chui, based in San Francisco and a partner with the McKinsey Global Institute, and James Manyika, the chairman of the McKinsey Global Institute and a senior partner in our San Francisco office. Michael and James, welcome.

James Manyika: Thanks for having us.

Michael Chui: Great to be here.

David Schwartz: Michael, where do we see the most potential from AI?

Michael Chui: The number-one thing that we know is just the widespread potential applicability. That said, we’re quite early in terms of the adoption of these technologies, so there’s a lot of runway to go. One of the other things we’ve discovered is that one way to think about where the potential for AI lies is simply to follow the money.

If you’re a company where marketing and sales is what drives the value, that’s actually where AI can create the most value. If you’re a company where operational excellence matters the most to you, that’s where you can create the most value with AI. If you’re an insurance company, or if you’re a bank, then risk is really important to you, and that’s another place where AI can add value. It goes through everything from managing human capital and analyzing your people’s performance and recruitment, et cetera, all through the entire business system. We see the potential for trillions of dollars of value to be created annually across the entire economy [Exhibit 1].

Exhibit 1: AI has the potential to create value across sectors

David Schwartz: Well, it certainly sounds like there’s a lot of potential and a lot of value yet to be unleashed. James, can you come at it from the other direction? What are the big limitations of AI today? And what do these mean in practical terms for business leaders?

James Manyika: When we think about the limitations of AI, we have to keep in mind that this is still a very rapidly evolving set of techniques and technologies, so the science itself and the techniques themselves are still going through development.

When you think about the limitations, I would think of them in several ways. There are limitations that are purely technical. Questions like, can we actually explain what the algorithm is doing? Can we interpret why it’s making the choices and the outcomes and predictions that it’s making? Then you’ve also got a set of practical limitations. Questions like, is the data actually available? Is it labeled? We’ll get into that in a little bit.

But I’d also add a third limitation. These are limitations that you might call limitations in use. These are what lead you to questions around, how transparent are the algorithms? Is there any bias in the data? Is there any bias in the way the data was collected?

David Schwartz: Michael, let’s drill down on a first key limitation, data labeling. Can you describe the challenge and some possible ways forward?

Michael Chui: One of the things that’s a little bit new about the current generations of AI is what we call machine learning—in the sense that we’re not just programming computers, but we’re training them; we’re teaching them.

The way we train them is to give them this labeled data. If you’re trying to teach a computer to recognize an object within an image, or if you’re trying to teach your computer to recognize an anomaly within a data stream that says a piece of machinery is about to break down, the way you do that is to have a bunch of labeled data and say, “Look, in these types of images, the object is present. In these types of images, the object’s not present. In these types of data streams, the machine’s about to break, and in these types of data streams, the machine’s not about to break.”
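The labeled-data idea Michael describes can be sketched in a few lines. The snippet below is an illustrative toy, not anything described in the conversation: it trains a hypothetical nearest-centroid classifier on hand-labeled machinery readings ("ok" versus "about to break") and then labels new readings.

```python
# Toy illustration of supervised learning from labeled data:
# each training example is a sensor reading paired with a
# human-supplied label. A nearest-centroid rule then classifies
# new readings. Purely illustrative, not a production method.

def train(labeled_data):
    """labeled_data: list of (reading, label). Returns the mean reading per label."""
    sums, counts = {}, {}
    for reading, label in labeled_data:
        sums[label] = sums.get(label, 0.0) + reading
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, reading):
    """Assign the label whose centroid is closest to the reading."""
    return min(centroids, key=lambda label: abs(centroids[label] - reading))

# Vibration readings labeled by a (hypothetical) human inspector.
data = [(0.1, "ok"), (0.2, "ok"), (0.3, "ok"),
        (2.1, "about to break"), (2.4, "about to break")]
model = train(data)
print(predict(model, 0.25))  # -> ok
print(predict(model, 2.0))   # -> about to break
```

The point of the sketch is the shape of the workflow: people supply the labels in `data`, and only then can the machine generalize to unlabeled readings.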


We have this idea that machines will train themselves. Actually, we’ve generated a huge amount of work for people to do. Take, for example, self-driving cars. These self-driving cars have cameras on them, and one of the things that they’re trying to do is collect a bunch of data by driving around.

It turns out, there is an army of people who are taking the video inputs from this data and then just tracing out where the other cars are—where the lane markers are as well. So, the funny thing is, we talk about these AI systems automating what people do. In fact, it’s generating a whole bunch of manual labor for people to do.

James Manyika: I know this large public museum where they get students to literally label pieces of art—that’s a cat, that’s a dog, that’s a tree, that’s a shadow. They just label these different pieces of art so that algorithms can then better understand them and be able to make predictions.

In older versions of this, people were identifying cats and dogs. There have been teams in the UK, for example, identifying different breeds of dogs in order to label dog images, so that when algorithms use that data, they know what each image shows. The same thing is happening in a lot of medical applications, where people have been labeling different kinds of tumors, for example, so that when machines read those images, they can better understand what is a tumor and what kind of tumor it is. But it has taken people to label those different tumors for that to then be useful for the machines.

Michael Chui: A medical diagnosis is the perfect example. So, for this idea of having a system that looks at X-rays and decides whether or not people have pneumonia, you need the data to tell whether or not this X-ray was associated with somebody who had pneumonia or didn’t have pneumonia. Collecting that data is an incredibly important thing, but labeling it is absolutely necessary.

David Schwartz: Let’s talk about ways to possibly solve it. I know that there are two techniques we’re hearing a lot about that can reduce the need for labeled data. One is reinforcement learning, and the other is GANs [generative adversarial networks]. Could you speak about those?

Michael Chui: A number of these techniques are meant to basically create more examples that allow you to teach the machine, or have it learn.

Reinforcement learning has been used to train robots, in the sense that if the robot does the behavior you want it to, you reward it for doing so. If it does a behavior you don’t want it to do, you give it negative reinforcement. In that case, what you have is a function that says whether the behavior was good or bad. Rather than needing a huge set of labeled data, you just have a function that scores what the system did. That’s one way to get around needing labeled data: by having a function that tells you whether you did the right thing.
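The reward-function idea can be sketched with a toy agent. Everything here is hypothetical: a made-up two-action robot ("grasp" or "drop"), a hand-written reward function, and a simple running-average value estimate rather than any production reinforcement-learning algorithm.

```python
import random

# Minimal sketch of reinforcement learning: there is no labeled
# dataset, only a reward function that scores each action after
# the fact. The agent learns a value estimate per action.

def reward(action):
    """Hypothetical reward: +1 for the desired behavior, -1 otherwise."""
    return 1.0 if action == "grasp" else -1.0

random.seed(0)
values = {"grasp": 0.0, "drop": 0.0}   # estimated value per action
alpha, epsilon = 0.1, 0.2              # learning rate, exploration rate

for _ in range(200):
    # Explore occasionally; otherwise pick the highest-valued action.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)
    # Nudge the estimate toward the observed reward.
    values[action] += alpha * (reward(action) - values[action])

print(max(values, key=values.get))  # -> grasp
```

No one ever labeled any example as "grasp"; the reward signal alone steers the agent toward the desired behavior.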

With GANs, you basically have two networks: one is trying to generate the right thing, and the other is trying to discriminate whether what was generated is the right thing. Again, it’s another way to get around the limitation of needing huge amounts of labeled data, in the sense that you have two systems competing against each other in an adversarial way. It’s been used for doing all kinds of things. The generative part, the “G,” is what’s remarkable. You can generate art in the style of another artist. You can generate architecture in the style of other buildings you’ve observed. You can generate designs that look like other things you might have observed before.

James Manyika: The one thing I would add about GANs is that, in many respects, they’re a form of semisupervised learning, in the sense that they typically start with some initial labeled data and then build on it in a generative, adversarial way, almost like a contest.
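The two-network contest can be sketched in one dimension. This is a deliberately simplified illustration, not a real GAN: the "generator" here is a single number g, the "discriminator" is a one-feature logistic model, and both are updated with hand-derived gradients.

```python
import math
import random

# Toy sketch of the adversarial idea: "real" data are numbers near 5;
# the generator shifts its parameter g so its samples look real; the
# discriminator D(x) = sigmoid(w*x + c) is trained to output 1 for
# real samples and 0 for generated ones. As the contest proceeds,
# g drifts toward the real data's mean.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

random.seed(0)
g, w, c = 0.0, 0.0, 0.0      # generator and discriminator parameters
lr_d, lr_g = 0.05, 0.05      # learning rates

for _ in range(3000):
    real = 5.0 + random.gauss(0, 0.5)   # sample of real data
    fake = g + random.gauss(0, 0.5)     # sample from the generator
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    # Discriminator ascends log D(real) + log(1 - D(fake)).
    w += lr_d * ((1 - d_real) * real - d_fake * fake)
    c += lr_d * ((1 - d_real) - d_fake)
    # Generator ascends log D(fake): move g so fakes fool D.
    g += lr_g * (1 - d_fake) * w

print(round(g, 1))  # g has moved from 0 toward the real mean of 5
```

The adversarial structure is the point: neither side is handed a large labeled dataset; each improves only by trying to beat the other.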

