Rethink Artificial Intelligence – AI Objectives

When customers feel like no one is listening, they may be right. Or, they may be unknowingly talking to a machine. Last week, Google announced Duplex, an artificial intelligence (AI) assistant that can handle customer service requests, such as booking an appointment or providing basic information. We’ve had automated phone attendants for years, but the new buzz is that customers can’t tell that this automated attendant isn’t human.

The Duplex announcement quickly brought questions about transparency. Should companies include a notice that “objects on the phone are less human than they appear?”

Of course they should (and Duplex will). Investing massively in technology to be intentionally duplicitous seems like it belongs in the junkyard of bad ideas we shouldn’t even consider, such as earthquake generators or encryption back doors. What would be the benefits of being dishonest with customers? Is the “gee whiz” worth the potential backlash?

Just because we can do something doesn’t mean we should. Consider the benefits of being up-front about interactions with bots:

  • Customer satisfaction can increase. For example, if customers know they are interacting with a computer and not using another person’s time, they may spend more time customizing their selection. I’ve certainly spent far more time online investigating travel options than I would ever subject a human travel agent to. With better customization, customer satisfaction can increase.
  • Customers can be better informed. When interacting with other humans, people hesitate to ask what they perceive to be dumb or repetitive questions. We can be reluctant to admit that we still don’t understand. But when interacting with machines, we are free to clarify to our heart’s content. As the use of technology removes the concern about long call times, we don’t have to worry about using a machine’s time.
  • Customer interaction can be faster. Social norms dictate that we first lubricate the conversation with bromides and banalities about weather and pleasantries. I deeply suspect that most customer service people actually don’t care if “I’m doing OK for a Monday.” Knowing that we are interacting with a machine, we can get straight to the point, saving everyone time.
  • Customers may be more honest. We care what other people think about us. As a result, we may not be completely honest when what we think differs from what we believe the other person wants to hear — the social desirability response bias. We are less likely to worry about what a machine thinks about us than what another human thinks.
So why are companies even considering the duplicitous option? Like dogs who don’t know what to do once they’ve caught the car they’ve been barking at and chasing, we’ve chased the Turing test (which tests whether a machine can mimic a human’s intelligent behavior) and are not sure what to do now that we’ve caught it.

For science, passing the Turing test makes sense as a goal. For business, other goals most likely make a lot more sense. Even if computers can pass for humans, human mimicry may be a shortsighted goal. We already have humans. We don’t need to imbue these bots with the “hmms,” “ums,” and “uhs” that are part of normal imperfect speech patterns. Duplication misses an opportunity for more.


ABOUT THE AUTHOR

Sam Ransbotham is an associate professor of information systems at the Carroll School of Management at Boston College and the MIT Sloan Management Review guest editor for the Artificial Intelligence and Business Strategy Big Idea Initiative. He can be reached at sam.ransbotham@bc.edu and on Twitter @ransbotham.

Article published at the MIT Sloan Management Review, May 21, 2018.


What are the best sources to learn more about artificial intelligence?

This TOPBOTS article goes through the major free resources for learning AI and how they differ. Highlights are listed here.

1) ANDREW NG’S MACHINE LEARNING AT STANFORD UNIVERSITY (ONLINE COURSE)

2) SEBASTIAN THRUN’S INTRODUCTION TO MACHINE LEARNING (ONLINE COURSE)

Also offered on Udacity is Thrun’s “Introduction to Artificial Intelligence.”

3) GEOFFREY HINTON’S NEURAL NETWORKS FOR MACHINE LEARNING (ONLINE COURSE)

Neural Networks For Machine Learning on Coursera

4) JEREMY HOWARD’S fast.ai & DATA INSTITUTE CERTIFICATES (ONLINE & IN-PERSON COURSES)

Deep Learning Part One covers the basics of deep learning, while Part Two covers advanced applications.

5) DEEP LEARNING BY YOSHUA BENGIO & IAN GOODFELLOW (BOOK)

Their book, Deep Learning, published by MIT Press, is freely available online.

6) NEURAL NETWORKS & DEEP LEARNING BY MICHAEL NIELSEN (BOOK)

The code for the exercises in “Neural Networks & Deep Learning” is written in Python 2.7 and is relatively easy to understand even if you don’t normally use the language.

7) DEEP LEARNING WITH TENSORFLOW (ONLINE COURSE)

Udacity’s Deep Learning by Google online course is taught by Vincent Vanhoucke, a Principal Scientist at Google, and technical lead in the Google Brain team.

TensorFlow’s website also offers beginner and advanced tutorials and strong community support.

8) NIPS CONFERENCE VIDEO ARCHIVE (VIDEO)

If you’ve missed the conference in the past or simply can’t make the event in person, check out the NIPS video archives from 2015 and 2016.

9) SCIENTIFIC PAPERS

New papers are published every day in the artificial intelligence and deep learning space. Google Scholar, arXiv, and ResearchGate are great repositories to start with, but many more collections exist.

If you’re wondering which papers to start with, here is a starting list of foundational research papers to read.

Thanks Mariya Yao at TOPBOTS for researching and compiling!

Source: Adelyn Zhou, Bots, AI, and IoT | Entrepreneur, Marketer & World Traveler

 

The Most Important Skill in The Age of Artificial Intelligence (AI)


No doubt each of us has met Artificial Intelligence (AI), whether by shopping online and seeing “suggested for you” products, having ads pop up in a Facebook feed, or depositing a check at the bank’s ATM. Other industries, ranging from health and fitness to media, dating apps, and finance, have all adopted AI in some capacity to optimize and automate their processes. Although known in the academic world since the 1990s, AI has only come into mainstream use in recent years. So, what exactly is the power of AI, and why is it becoming so popular now?

AI is what is known as a forward model in computer science, meaning it is a computer model that makes decisions based on the input it receives, such as pictures, numbers, and really anything that is mathematically quantifiable. This type of model can modify its predictions based on the dynamic flow of input: the longer the model receives input, the ‘smarter’ it becomes and the better guesses it can make about future behavior.

It is different from the backward model, or non-AI predictive computer algorithms, in that a backward model makes its predictions from a past data set and the particular parameters chosen at the time the model was built. It therefore does not have the capacity to modify its guesses: no matter how many times the backward model is fed data, it will always give the same answer for the same data point. This is the crucial difference from the forward model, where the answer changes as more data is fed into the model.
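
To make that distinction concrete, here is a minimal sketch, assuming scikit-learn and a toy linear-regression task (the article names no particular library or algorithm): the “backward” model is fit once on historical data and never changes, while the “forward” model keeps updating its prediction rule as new data streams in.

```python
# Sketch only: a static ("backward") model vs. an online ("forward") model.
# The data, the linear model, and scikit-learn itself are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LinearRegression, SGDRegressor

rng = np.random.default_rng(0)

def make_batch(n):
    """Toy data: one feature x and a noisy target y = 3x + noise."""
    x = rng.uniform(0, 10, size=(n, 1))
    y = 3 * x[:, 0] + rng.normal(0, 1, size=n)
    return x, y

# "Backward" model: fit once on a fixed historical data set; its answers never change.
x_hist, y_hist = make_batch(200)
static_model = LinearRegression().fit(x_hist, y_hist)

# "Forward" model: keeps updating as new data arrives.
online_model = SGDRegressor(learning_rate="constant", eta0=0.01)
online_model.partial_fit(x_hist, y_hist)            # initial training

for _ in range(50):                                  # new data keeps streaming in
    x_new, y_new = make_batch(20)
    online_model.partial_fit(x_new, y_new)           # prediction rule keeps adapting

query = np.array([[5.0]])
print("static prediction:", static_model.predict(query))   # same answer every time
print("online prediction:", online_model.predict(query))   # drifts as more data arrives
```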

The reason AI is used more prevalently now than before is twofold: 1) the science behind algorithm development has advanced; and 2) the abundance of data being generated makes this a perfect time to use such models. One of the key principles of algorithm design for AI is that there must be enough ‘training’ data on which to ‘train’ the model before it is able to make meaningful predictions; this number can range from 10,000 data points upward. Data did not exist in this abundance before, and with the prevalence of high-speed computing, this is the perfect time to utilize this kind of model.

Now that we understand what AI is and why it is being used now, let’s examine what AI can and cannot do. AI can replicate and automate decisions based on the parameters that are fed into the program, but no matter what, AI cannot be creative. It is only a program that can quickly and efficiently match and sort data; it is unable to make creative decisions about things, ideas, or places.

Although it can replicate behavior, it does not have imagination, because it is just a sorting algorithm with advanced functional optimization and regression techniques. The human mind, on the other hand, is rapidly creative and imaginative. Humans can think of stories and imagine things which we then bring to life. As Walt Disney said, “If you can dream it, you can do it.” AI cannot do that; what it can do is perform repetitive tasks at an extremely efficient rate. So, in the age of AI, human creativity will command more of a premium than ever before.

The key takeaway is that more and more industries will be leveraging the power of AI, which may lead to a reduced workforce in areas that rely on repetitive tasks. Instead, the world will need more creativity in all endeavors, even in implementing and leveraging AI technology creatively.

Source/Author: Dr. Anna Powers is an entrepreneur, advisor and an award-winning scientist. Her passion is sharing the beauty of science and encouraging women to enter STEM fields.

WOMEN@FORBES, DEC 31, 2017 @ 08:06 PM 

How AI Is Making Prediction Cheaper

Avi Goldfarb, a professor at the University of Toronto’s Rotman School of Management, explains the economics of machine learning, a branch of artificial intelligence that makes predictions. He says as prediction gets cheaper and better, machines are going to be doing more of it. That means businesses — and individual workers — need to figure out how to take advantage of the technology to stay competitive. Goldfarb is the coauthor of the book Prediction Machines: The Simple Economics of Artificial Intelligence.

CURT NICKISCH: Welcome to the HBR IdeaCast, from Harvard Business Review. I’m Curt Nickisch, in for Sarah Green Carmichael.

YOUTUBE: [Two women speaking] We’ve got this all tabbed up? Yup, it’s all tabbed up. OK, dialing. [Phone rings]

CURT NICKISCH: There’s a YouTube video with millions of views. In it, three young English-speaking women use Google Translate to order food in Hindi from an Indian restaurant. They copy and paste their order in English into the computer, and it translates items like “samosas” and reads them aloud in the foreign language.

At one point, they give their address for the delivery. And the worker asks – in Hindi — if they want anything else. They don’t know what he’s saying, so they give their address again.

YOUTUBE: [Man speaking over the phone in Hindi]

CURT NICKISCH: Despite the temporary miscommunication, when the order shows up, it’s correct.

YOUTUBE: [Women speaking] We got two basmati rice. Three samosas. Two fish curry! Here we go!

CURT NICKISCH: What’s remarkable about this video is that it is eight years old. Since then, Google Translate has gone from translating word by word to processing more at the sentence level. Pretty soon, you won’t have to copy and paste into a search box anymore.

AVI GOLDFARB: And you’ll be able to put something in your ear or have something on your phone and get instant translations for whatever language anywhere in the world and understand what people are talking about.

CURT NICKISCH: That’s our guest today, Avi Goldfarb. For him, one of the most mind-blowing uses of artificial intelligence technology is machine translation.

AVI GOLDFARB: It reads like language, and that change has helped me recognize that this is something that’s possible. It can really happen.

CURT NICKISCH: Goldfarb is a professor of marketing at the University of Toronto’s Rotman School of Management. And he’s the coauthor of the book Prediction Machines: The Simple Economics of Artificial Intelligence.

Avi, thanks so much for talking with the HBR IdeaCast.

AVI GOLDFARB: Thanks for having me here.

CURT NICKISCH: So, besides translation, what’s just another big example of how you think machine learning is setting us up for big changes in how we do business?

AVI GOLDFARB: Another thing that I think — a technology that I’ve seen a few times, still not perfected but might get there soon — is this idea that you can see what something will look like on you online.

CURT NICKISCH: Right.

AVI GOLDFARB: So, Amazon has been trying that; a few people have been trying that. It’s still not close to what the experience is in a real store, a physical store, but the technology is getting better and a lot better very quickly. So, when I — I remember I worked with a startup a couple years ago that was trying to do something along these lines, and it was not close, and they didn’t get a lot of traction, for obvious reasons. Now I actually see Amazon and others trying to commercialize, recognizing that it’s imperfect, and with that trajectory, I can see in a few years we’re going to have something very different where we can really do and understand what things are gonna look like on us without having to go to a store.

CURT NICKISCH: Because Amazon and those other places are learning a lot right now.

AVI GOLDFARB: Because they’re learning a lot, and they’re investing in the technology, and the technology is getting better so that it can predict what something would look like on a human that has never been on before.

CURT NICKISCH: We’ve just talked about a few examples of just amazing ways that technology can change business and also how we consume and live our lives, and it’s just a drop in the bucket of a lot of the stuff that’s going on there. But what I found so fascinating about Prediction Machines, the book that you coauthored, is that you take what is kind of this very-hard-to-predict, sweeping trend, that it’s impossible to know exactly how it’s going to unfold, and you turn it into basically a very simple economic model. And I’m a sucker for just like the economic argument. What is the simple economics of artificial intelligence so that we can understand it that way?

AVI GOLDFARB: Sure. So, artificial intelligence and the idea of artificial intelligence has been around for decades, and we’ve had fits and starts over the years. Starting in the fifties, if not earlier, you were talking about, oh, computers are going to actually learn to think. And it’s always been a bit of an unfair race in the sense that as soon as the computer can do something that a human can do, we no longer call it artificial intelligence. And so, there’s been the sense that artificial intelligence is what a computer can’t yet do, and once a computer can do that, then that’s just computing and the remaining stuff is intelligence.

What’s happened in the last 10 years and especially in the last six years, is something a little bit different, which is that a particular branch of artificial intelligence called machine learning has improved a lot, to the point where a lot of things that just 10 years ago we thought of as inherently human problems can now be done by machines.

And so, in understanding why there’s been this excitement around AI in the last 10 years, if not the last two years, it’s all driven by machine learning, which is prediction technology. And so, you should think about prediction as the process of filling in missing information. So, what machines have gotten very good at is filling in missing information, is prediction. And so, I’m an economist, and so, how do I see the world? I think about, well, something’s gotten easier. We can think about that as a drop in costs. If you want to think about it as it’s gotten better, better quality, that gets you to the same place, but something for given quality is now cheaper. And as an economist, I know what happens when something gets cheaper. We all know what happens when something gets cheaper: we want to buy it more. So, as prediction gets cheaper, as prediction gets better, we’re going to do more and more and more prediction, and that prediction is gonna be done by machines.

CURT NICKISCH: Got it. And prediction, even for humans, is hard, right?

AVI GOLDFARB: It is. So, if it’s the kind of prediction that we know humans do badly, machines don’t make those kinds of mistakes, and there’s all sorts of aspects of that. So, another thing that we’ve seen is in the process of hiring. So, we know humans are pretty bad at predicting which applicants for a job are going to perform best. We make two kinds of mistakes. One kind of mistake is we’re just wrong. We think somebody’s gonna do really well, and they do really badly. We think someone else is gonna do really badly, and they end up doing really well. The other kind of mistake we make is related to our biases. We have stereotypes and people come in, we impose those stereotypes on the applicants and then assume that they’re going to be true. And we ended up not hiring people we should and hiring people we shouldn’t because of these biases.

Machines can improve on those human mistakes in both directions. So, first, the machines are just going to make fewer basic mistakes as long as they have a measure of what performance means. So, we’ve seen evidence of this in call centers. So, in call centers, the key thing that marks a good hire is somebody who can last a long time. It’s very expensive to train people up, and machines are very good at predicting tenure in the job much better than humans are. We’ve seen that from a couple of research projects, but not only that; the machines do that without some of the biases that humans have with respect to gender and race and things like that. That’s not to say that machines won’t also be biased in various ways, precisely because they’re programmed by people. So, the expectation is that they should be at least no worse than the humans who program them in terms of these issues.

CURT NICKISCH: So, let’s continue this economic model here. Prediction gets cheaper. What else is happening in this model?

AVI GOLDFARB: The way I like to think about the economics of the book is the first point is prediction is cheaper, right, and that means we’re gonna do more prediction, OK. Just like when the price of coffee falls, we buy more coffee — nothing new there. The other thing to remember is, when the price of coffee falls, we buy less tea, and so that’s the substitute story. That’s the question: what are you going to have less of in an organization? You’re going to have fewer people who do prediction. The other thing that happens, and I think the most important thing, is when the price of coffee falls, we buy more cream and sugar. And so, the core of the book is these are the complements. What are the cream and sugar for prediction? If prediction is cheaper, what becomes more valuable? What do we end up using more of and buying more of?

In the book we emphasize that data becomes more valuable, and that being able to take an action matters — decisions aren’t useful without being able to do something about them. And perhaps most importantly, what we humans have is judgment, the ability to figure out which predictions to make and what to do with the prediction once we have it. And so, when you think about the opportunities around machine prediction, a lot of them are driven by the availability of data, the ability to take an action based on a prediction, and whether the people in charge, the people supervising the machine, can use it well enough to take advantage and help the organization.

CURT NICKISCH: So, let’s break each of those down, starting with data. What do companies and people need to know about it?

AVI GOLDFARB: Sure. So, lots of people think that if their company’s sitting on a lot of data, then data’s the new oil, and so their company’s sitting on some kind of great mine where they can make lots of money off it for years and years and years. The data-is-oil metaphor I think is really good, but it’s good because just like oil, once you’ve used your data, the data you’re sitting on, it’s used, and you’re going to extract value from it, but you’re going to extract value from it once. Say you’re a warehouse, and you’re sitting on data on past inventories in your warehouse, OK? That’s going to help you predict future inventories, and you’ll now have a better model of future inventories, and you can make money on it. But it’s only valuable for that one model, essentially. You might think of something a little bit different, but fundamentally you’ve extracted the core value from that data you’re sitting on, and it’s done. In order then to continually improve your models of inventory, you need new data on what’s coming in in order to make it better.

The key source of sustained competitive advantage from data is through what we call feedback data, which is the ability to continually improve your AI. And that means you need to invest in learning. What does investing in learning mean? It means making your product potentially worse in order to improve the AI.

So, think about using Waze — it’s for driving directions — and if they learn that there’s an accident on the road somewhere and there’s a backup, the app, the software, the AI will take you a different route. And it’ll take most people a different route, but at some point Waze has to figure out if that backup, that accident has cleared, and so it has to invest and essentially sacrifice some drivers or some technology in order to figure out whether that backup has cleared. And someone has to invest in improving the AI sometimes even at the expense of the immediate customer experience.
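
Goldfarb is describing the classic explore/exploit tradeoff. Here is a minimal sketch of one common way to implement it, an epsilon-greedy rule for route selection (the routing setup, the travel-time numbers, and the epsilon value are all invented for illustration, not anything Waze has disclosed).

```python
# Epsilon-greedy routing: mostly send drivers down the route currently believed
# fastest (exploit), but occasionally send one down the other route (explore) so
# the system learns when a reported backup has cleared. All numbers are made up.
import random

random.seed(42)

routes = ["highway", "detour"]
avg_minutes = {"highway": 60.0, "detour": 35.0}   # current beliefs: highway has a backup
counts = {"highway": 1, "detour": 1}
EPSILON = 0.1                                      # fraction of drivers used to explore

def observed_travel_time(route):
    """Stand-in for the real world; pretend the highway backup has already cleared."""
    true_mean = {"highway": 25.0, "detour": 35.0}[route]
    return random.gauss(true_mean, 3.0)

for driver in range(200):
    if random.random() < EPSILON:
        route = random.choice(routes)                         # explore: sacrifice a driver to learn
    else:
        route = min(routes, key=lambda r: avg_minutes[r])     # exploit: best current belief
    minutes = observed_travel_time(route)
    counts[route] += 1
    avg_minutes[route] += (minutes - avg_minutes[route]) / counts[route]   # running average

print(avg_minutes)   # the highway estimate drifts back down once a few drivers explore it
```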

CURT NICKISCH: Wow, that’s interesting. I had never thought of it that way. So, they may send somebody down that road and see if they get through or not.

AVI GOLDFARB: Correct. And so —

CURT NICKISCH: And that person is just like, oh, what terrible luck; an accident just happened.

AVI GOLDFARB: Or they might think Waze isn’t as good as it seemed like it was.

CURT NICKISCH: Right. OK. What about action?

AVI GOLDFARB: OK. The prediction is really only valuable if it leads to a change in behavior. So, a prediction on how much yogurt somebody’s going to sell, a prediction on which stocks are going to go up or down — all of these things are only valuable if you can then go out and fill up your inventory, change the prices, buy the stocks, these sorts of things. You know, if you’re sitting on a lot of data, we think you’re a big incumbent company, and you might have some advantage.

But if you own the action, if you, if the customer relationship is with you and not with whatever startup is coming up with a better prediction on something you do, you have an advantage. That’s the takeaway from understanding the benefit of action.

CURT NICKISCH: OK. And now maybe the funnest one is judgment.

AVI GOLDFARB: Yeah. OK.

CURT NICKISCH: Which makes us feel good about being human again.

AVI GOLDFARB: Absolutely.

CURT NICKISCH: Yeah.

AVI GOLDFARB: So —

CURT NICKISCH: Break that one down.

AVI GOLDFARB: OK, so judgment is the ability to figure out which predictions to make and what to do with the prediction once you have it. You can have a machine that gives you the best predictions in the world, but if you don’t know what kinds of predictions are gonna be valuable, or if you can’t tell the machine something of use to your organization or to you individually, then who cares? And so, what judgment is all about is knowing what you care about, what your organization cares about, and how to tell the machine to do that.

The simple example we like to use is thinking about the context of whether it’s going to rain. OK? So, you have a weather prediction about how likely it is to rain or not. And depending on whether it’s likely to rain or not, you may or may not carry an umbrella. OK? But that choice is going to depend on how much you hate getting wet and how much you hate carrying around an umbrella. And so, the prediction is rain, no rain, but the judgment is how costly that umbrella is to you to carry around versus how big a deal it will be to get wet. Now, an umbrella doesn’t sound like a very big deal, like a sort of a pedestrian decision. But that example is exactly the same as an insurance problem. So, if you think about, should you take out insurance, it is exactly the same situation: How costly is that insurance going to be is like, should you carry an umbrella? How much better off are you if you’re protected by your insurance, you know, if something goes wrong is exactly like, well, how important is it to have that umbrella with you if it starts to rain? And so, a whole bunch of decisions look a lot like the umbrella decision. So first, you know, all the classic insurance problems, but also how much do you invest in security? It’s all about weighing risk, and these are all judgment problems. So, you can predict how risky something is going to be, but that’s only going to be useful if you know what to do with those predictions, and what better prediction allows you to do is give final judgment.
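
The umbrella example reduces to a small expected-cost calculation. A toy sketch, with every number invented purely for illustration: the rain probability is the machine’s prediction, and the two cost weights are the human judgment.

```python
# Prediction vs. judgment in the umbrella decision. The rain probability stands in
# for the "prediction machine"; the cost weights encode human judgment. All values invented.
p_rain = 0.3                 # predicted probability of rain
cost_carry = 1.0             # how much you dislike lugging an umbrella all day
cost_wet = 10.0              # how much you dislike getting soaked

expected_cost_carry = cost_carry          # you pay the carrying cost whether or not it rains
expected_cost_skip = p_rain * cost_wet    # you only pay the soaking cost if it rains

decision = "carry the umbrella" if expected_cost_carry < expected_cost_skip else "leave it at home"
print(decision)   # with these numbers: 1.0 < 3.0, so carry the umbrella
```

Swap the labels to premiums and losses and the same arithmetic describes the insurance and security decisions Goldfarb mentions.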

CURT NICKISCH: The other message here is just that if judgment is a complement to prediction, whose price is going down, that’s a place where you can focus your career, aim your company, and just bring those strengths to bear.

AVI GOLDFARB: Absolutely. So, a key, a key message of the book, but also I think just the reality of the technology is if your job is prediction, then you’re going to have to rethink some of your skills and invest in learning new ones. That sounds pretty dour, and pretty bad, but it’s not in the sense that often the people who are good at prediction are going to be exactly the people who know what to do with those predictions. OK?

And to get a sense of that, I like to think about what happened with accounting as accounting moved to spreadsheets. OK? So, accountants used to spend their time doing arithmetic. Accountants don’t do arithmetic anymore. Maybe they do a little tiny bit, but fundamentally that’s not part of the job, but we still have a lot of accountants. Because it turned out the people who were really good at arithmetic were exactly the kind of people who would know how to use arithmetic well to help companies deal with the tax code, manage changes, identify where profit centers can be. So that skill set, the arithmetic skill set, which is no longer useful, turned out to be a great baseline for learning what to do with arithmetic. And I think that analogy is going to work in lots of high-skilled jobs around prediction.

CURT NICKISCH: If you’re in a company, and you’re just trying to figure out how to — how to employ people to make the best use of this technology within your firm, what’s a good way of thinking about that?

AVI GOLDFARB: What we need to think through is the workflow of a particular job in a particular process in your organization. So, increasingly driving is done by prediction. We’re not quite at autonomous driving yet, but there was an insight a while ago that we can teach machines to drive by telling them to predict what a good human driver would do. And so, over time, this is one of these prediction tasks that wasn’t an obvious prediction task 30 years ago that now we see, oh, we can do driving as prediction. And so, what’s happened is we need to think through the workflow of various people whose job is driving. And I think it’s useful to compare a long-distance bus driver to a school bus driver.

And if you look at the workflow of a long-distance bus driver, almost the entire workflow is driving. And that means that that job is in many ways likely to become automated over time. It’s not obvious what the need is for a human on the bus if what they’re doing is driving somebody from place to place or driving you from place to place. That’s different from a school bus driver, because the school bus driver, if you look at their workflow, they do a few things. I like to summarize them as two: one is driving, but the other is protecting the children, maybe from outsiders but particularly from each other. And so, even if over time the job of a school bus driver changes dramatically in the sense that they spend very little time driving, you’re still going to need some role for somebody to figure out how to do the protection side of that job.

Now, how that exactly plays out, I don’t know. One possibility is you end up with teachers on the bus, and you take advantage of that time as learning time. Another possibility is that one person remotely monitors several buses, and they have some system for protecting things. But, at least given our understanding of the technology today, you’re going to need someone to do that protection job even as driving becomes automated.

When you break down the workflow, you can see some things in some jobs are going to become completely automated, but in lots of jobs, yes, certain tasks within the job are going to become automated, but there are lots of other things they do that aren’t prediction tasks and that a human needs to do. And potentially over time, if it becomes cheap enough and easy enough, you can see even more jobs and more interesting jobs and better use of time because the mundane prediction part of the task is gone.

CURT NICKISCH: And that’s a really simple example, but you’re saying that you could do that with any job —

AVI GOLDFARB: You can do that —

CURT NICKISCH: Or any business process, business unit even.

AVI GOLDFARB: Absolutely. So, Goldman Sachs has broken down the steps to an IPO into over 100 small tasks. And when you look at each of those small tasks, you can identify which aspects of them are prediction, which aspects of them might be automated through some other technology, and which aspects require human judgment or input of more data or an action done by a human or an action done by a machine. And so, once you break down the workflow into specific pieces, you can identify the opportunities to insert prediction machines as a tool for a particular task.

CURT NICKISCH: Especially a high-cost, high-value workflow like that one.

AVI GOLDFARB: Especially a high-value workflow like that one, absolutely. And the immediate result you might think is productivity enhancement in the sense of some people who used to have jobs won’t be doing those anymore and it will be lower cost with machines, but at the same time, other aspects of the workflow — once the bottleneck potentially becomes cheaper, we can use other aspects of the workflow more and get more value out of the humans there and even create more jobs in that part.

CURT NICKISCH: I mean, just going through that task example makes you think of a lot of tradeoffs, and with tradeoffs we think of strategy. From a business unit, from a corporate strategic point of view, how do you apply this simple economics model to what you’re doing?

AVI GOLDFARB: The way to think about it is this: these are prediction machines, and so what does prediction do? It reduces uncertainty. And so, if there are aspects of your business where you can’t do everything you’d want to do because of uncertainty, because you have to hedge against the risk of uncertainty, then a prediction machine could be transformative.

There’s so much uncertainty about what an end consumer might want from a retailer, from Amazon, that Amazon has to wait for you to tell them what you want before it can ship to you. But if that uncertainty is resolved, and they know what you want, now Amazon’s business model can change.

And so, the business model moves from a shopping-then-shipping model to a very different model, which is, they send me things I want, and they’re almost always right; and if I don’t happen to want them, then I can in principle send those very few things back.

We can do a similar example. We’ve talked a little bit about machines being better and better at hiring and HR. We can take that to the extreme similarly, which is that the reason we have to go through this onerous hiring process of posting a job and waiting for applications and screening through applications and interviewing and hiring is because there’s uncertainty at every stage — who’s going to be interested in applying, and if they apply, who’s going to be above some threshold, and whom we have to interview to see if they’re going to fit with the culture and interests and whether we’re going to get along and all these things. But if we have a great prediction machine, all that uncertainty could go away to the point where we could look at the database of people who might be interested in the job, who exist in the profession —

CURT NICKISCH: And make it —

AVI GOLDFARB: And make an offer right away and skip all those intermediate steps and totally transform the industry because we’ve resolved that core uncertainty.

CURT NICKISCH: Avi, this has been great. Thank you so much for coming in and talking.

AVI GOLDFARB: Thank you very much.

CURT NICKISCH: That’s Avi Goldfarb. He’s a professor of marketing at the University of Toronto’s Rotman School of Management.

He’s also the coauthor of the book Prediction Machines: The Simple Economics of Artificial Intelligence. You can find it at HBR.org.

This episode was produced by Amanda Kersey. Adam Buchholz is our audio product manager. And we get technical and production help from Rob Eckhardt.

Thanks for listening to the HBR IdeaCast. I’m Curt Nickisch.

Source: HBR IdeaCast, MAY 22, 2018

Competitive Technology Intelligence (CTI): The Future of Artificial Intelligence: Is Your Job Under Threat?


Read this article to learn more about the future of AI and whether you should be worried about losing your job anytime soon.

Since the dawn of machinery and the first flickering of computer technology, humanity has been obsessed with the idea of artificial intelligence — the concept that machines could one day interact, respond, and think for themselves as if they were truly alive.

Every year, the possibility of an “intelligent technology” future becomes more and more of a reality — as algorithms and machine learning improve at a lightning-fast rate. According to experts across the globe, machines will soon be capable of replacing a variety of jobs — from writing bestsellers to composing Top 40 pop songs and even performing your open-heart surgery!

However, the biggest questions remain: how long until that point, and how did we get to where we are now?

The Origins of AI

When attempting to chart the future, it’s always essential to know the past. While the idea of “artificial intelligence” had been speculated about in fiction for centuries — as far back as Mary Shelley’s Frankenstein or Karel Čapek’s R.U.R. (Rossum’s Universal Robots) — it was not until Alan Turing in 1950 that the concept of AI first became more than a fantasy.

Most famous as the man behind the Enigma code-breaking machine during the Second World War, the English computer scientist and mathematician spent his time post-war devising the Turing Test. Basic but effective in nature, the test involves seeing if artificial intelligence can hold a realistic conversation with a human being, thereby convincing them they are also human.

The test has served as the benchmark for AI ever since its introduction in Turing’s paper, yet it was only in 2014 that a Russian-designed chatbot, Eugene Goostman, was able to successfully convince 33% of human judges that it was human. Turing’s original test suggested that over 30% was a pass — but clearly, there is plenty of room for improvement in the future.

The Evolution of AI


Since Turing’s test, AI was limited to basic computer models — with MIT professor John McCarthy coining the phrase “artificial intelligence” in 1955. While working at MIT, he created an AI laboratory where he developed LISP (List Processing), a computer programming language for AI research designed around offering expansion potential as technology improved in the future.

Despite some early machines showing promise, from the “first robotic person,” Shakey the Robot, in 1966, to the anthropomorphic androids WABOT-1 and WABOT-2 from Waseda University, the field of AI started to plateau in the 1980s. It wasn’t until Rodney Brooks in 1990 that the idea of computer intelligence would be revitalized.

In his seminal 1990 paper, “Elephants Don’t Play Chess,” Brooks suggested that the robotics field had been approaching the idea of artificial intelligence all wrong. Instead of creating machines that could carry out ever-more advanced singular “top-down” tasks — from playing the piano to calculating math problems — intelligence, he argued, should be built “bottom-up,” out of a machine’s relationship with the world around it.

It might sound obvious to us now, thanks to a lifetime rooted in the advances of AI, but back in the early 90s, the suggestion that artificial intelligence should be reactive to its surroundings was revolutionary.

The Future AI Job Market

One of the biggest “bottom-up” advances for artificial intelligence is the ability to be intuitive in planning and responding to tasks. Perhaps the biggest breakthrough in this regard came in 2016 when AlphaGo, a custom program developed by Google’s DeepMind AI unit, beat the world’s best “Go” player.

The ancient Chinese board game had long been one of AI’s greatest challenges, the sheer variety of possible moves demanding that players evaluate and react in countless different ways on each turn. That a program was finally able to match this level of “humanity” was a real breakthrough, even more so than IBM’s Deep Blue defeating chess champion Garry Kasparov in 1997.

Because of this leap forward in intelligence, experts from across the globe now predict we will see an AI program win the World Series of Poker in just two short years. Not only that, but the same reactive technology is currently being investigated by the banking sector — with NatWest’s “Cora” chatbot tipped to replace all telephone banking by 2022.

What about other job sectors? Are they too under threat from the advancement of artificial intelligence? Well, recent research from the research and advisory firm Gartner suggests that 85% of customer interactions in retail will be AI-managed by 2020. The other 15%, mainly the human sales process, will take a fair while longer, with 2031 the closest estimate for full replacement.

What can be done?

Because automation has crept into modern society so slowly, it can be extremely difficult to predict how the job market will evolve as it gets ever more advanced. Perhaps the biggest challenge will be ensuring “artificial intelligence” does not lead to the mass-wipeout of several job sectors — almost certainly requiring new legislation to be passed, as well as a re-think of the employment market overall.

However, we have already seen shifts to incorporate the digital-driven advances in a variety of sectors, from banking to farming and beyond. Many predict that learning new skills early will be crucial for any affected sector, which looks set to be many of them. In short, the only way to beat the machines is to join them or at the very least know how to use them.

Commenting on the risk of artificial intelligence on the labor market, James Tweddle, AI Specialist at AI vs Humanity, said:

“The risk to the labor market from artificial intelligence is a growing one, particularly given the rapid rate at which AI seems to be developing. One of the biggest challenges for any artificial intelligence is the idea of ‘bottom-up’ learning — the ability for a machine mind to react in a situational manner rather than simply following algorithms. It is this lack of emotional intelligence within AI that gives humans the edge over robots. However, we must ensure that our skill set remains up to date if we are to compete going forward.”


Source: Lucia Widdop, May 22, 2018 · AI Zone · Opinion

Competitive Intelligence: The real-world potential and limitations of artificial intelligence

The McKinsey Podcast

Artificial intelligence has the potential to create trillions of dollars of value across the economy—if business leaders work to understand what AI can and cannot do.

In this episode of the McKinsey Podcast, McKinsey Global Institute partner Michael Chui and MGI chairman and director James Manyika speak with McKinsey Publishing’s David Schwartz about the cutting edge of artificial intelligence.

Source: The McKinsey Podcast, McKinsey Quarterly

Can Artificial Intelligence Replace Executive Decision Making?

Awash in data, executives dream of a time when the Jetson utopia finally manifests — and they find themselves sipping coffee and cashing checks while machines slave away for them, uncovering unexpected business insights and learning optimal ways to manage organizations.

Despite improvements in cognitive technologies, that dream managerial scenario is still far from reality. Decisions that executives face don’t necessarily fit into defined problems well suited for automation. At least for the time being, countless decisions still require human engagement.

Consider machine learning. To oversimplify, machine learning emphasizes algorithms that use numerous examples as inputs. In an ideal world, machine learning would reveal connections between observations and outcomes with minimal human guidance. In other words, machines would excel at finding patterns and making data-based predictions.

Recent advances in machine learning and cognitive technologies have been remarkable. We’ve seen impressive inroads in areas such as radiology and accounting. Nevertheless, executives resist using these approaches for decision making for many reasons, including …

  • Algorithmic approaches typically require numerous examples. Organizations rarely have the number of examples needed to understand relationships between everything in the world that can affect an organization. This is the managerial version of the “curse of dimensionality.” The ratio of “examples of past similar decisions” to “stuff that might be important for those decisions” can be abysmally low (the sketch after this list illustrates the problem).
  • Even with ever-increasing data collection, many known explanatory variables are still difficult to capture. Algorithmic performance is always better when more information is known, structured, and available. In particular, it is difficult to incorporate data about events that didn’t happen but could have, or that did happen but had no data collected about them.
  • Beyond that, executives can have a broad view of new information that didn’t exist before, but could make a difference in the future — such as coming legislative, regulatory, or technology changes. It is harder to make out-of-sample predictions than in-sample, particularly when extrapolating and boldly going where no data has gone before.
  • Executives don’t have multiple organizations that would enable them to make randomized A/B tests. Ideally, learning from past decisions could occur by observing similar scenarios with alternative decisions. Instead, executives must estimate counterfactuals based on limited information.
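
A toy illustration of that ratio problem, using synthetic data and a plain least-squares fit (nothing here comes from the article; the numbers are chosen only to make the point): with far more candidate explanatory variables than past decisions, a model can fit history perfectly and still predict new cases poorly.

```python
# When "stuff that might matter" (features) outnumbers "examples of past decisions",
# a fitted model can look perfect in-sample yet generalize badly. Synthetic data only.
import numpy as np
from numpy.linalg import lstsq

rng = np.random.default_rng(0)
n_train, n_test, n_features = 30, 200, 100      # far more features than past examples
w_true = rng.normal(size=n_features)

def make_data(n):
    X = rng.normal(size=(n, n_features))
    y = X @ w_true + rng.normal(scale=5.0, size=n)
    return X, y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

w_hat, *_ = lstsq(X_tr, y_tr, rcond=None)       # least-squares fit on the few past decisions

def r_squared(X, y, w):
    resid = y - X @ w
    return 1 - resid.var() / y.var()

print("fit on past decisions:", round(r_squared(X_tr, y_tr, w_hat), 2))   # ~1.00, looks perfect
print("fit on new decisions :", round(r_squared(X_te, y_te, w_hat), 2))   # much lower: little was truly learned
```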

As a result, current cognitive technologies focus on the easiest problems. And while this makes sense, the questions these approaches can answer may not be the foremost question in an executive’s mind. IBM emphasizes the progress in solving difficult problems — such as helping teachers personalize curriculum — while also noting current limitations, by pointing out that managing children is a much more difficult unstructured problem than current technology can solve.

I don’t want to trivialize the current state of cognitive technologies. Progress has been so amazing and fast that we have quickly become inured to amazing progress. A definition of artificial intelligence, for example, is “systems able to perform tasks that normally require human intelligence.” But the definition of “normal” is changing. Machines that autocorrect spelling and check grammar in real time as you write would have been fantastical not too long ago; now I’m annoyed when my machine guesses my mistyping incorrectly. Uncanny predictive ability would once have led to a trial in Salem; now it leads to a corner office on Wall Street.

Cognitive technologies will increasingly absorb the easiest aspects of executive jobs; that seems inevitable. The question, then, is what changes. Like other technology changes, there will be a mix of good, bad, and ugly. Executives may be liberated from the mundane and able to use time more creatively and productively. They may face increasing competitive threats from automated virtual workers who do the easiest tasks, leaving the rest of us with the most difficult jobs. Or, worst case, executives may lose the ability to contribute at all. It’s likely to be a mix of all of these. However, it is unlikely that the dream scenario of getting paid to sip coffee while machines work is a realistic option. We’ll need to add value irrespective of how “normal” changes.

What Managers Need to Know About Artificial Intelligence

The field of artificial intelligence (AI) is finally yielding valuable smart devices and applications that do more than win games against human champions. According to a report from the Frederick S. Pardee Center for International Futures at the University of Denver, the products of AI are changing the competitive landscape in several industry sectors and are poised to upend operations in many business functions.

So, what do managers need to know about AI?

MIT Sloan Management Review and the Boston Consulting Group have joined forces to find out. Our new research initiative, Artificial Intelligence & Business Strategy, explores the most important business opportunities and challenges from AI, starting with three questions:

  1. From a managerial perspective, what is artificial intelligence?
  2. How will AI influence business strategy?
  3. What are the major management risks from AI?

    Artificial Intelligence for Managers

    Artificial intelligence covers a diverse set of abilities, much like Howard Gardner’s breakdown of human intelligence into multiple intelligences. These abilities include deep learning, reinforcement learning, robotics, computer vision, and natural language processing. The first report from Stanford’s One Hundred Year Study on Artificial Intelligence — “Artificial Intelligence and Life in 2030” — lists 11 applications of AI altogether. Each of these represents narrow AI, which is defined as a machine-based system designed to address a specific problem (such as playing Go or chess) by refining its own solution using rules and approaches not specifically programmed by its maker.

    More general AI refers to a system that can solve many types of problems on its own and is self-aware. No such general AI system currently exists (at least none have made it into public view). Two reports — the White House Office of Science and Technology Policy’s 2016 report called “Preparing for the Future of Artificial Intelligence” and the Stanford paper mentioned earlier — offer more detail on the various types of narrow AI.

    In an MIT Sloan Management Review article, Ajay Agrawal, Joshua Gans, and Avi Goldfarb provide a managerial perspective of AI and argue that the business value of AI consists of its ability to lower the cost of prediction, just as computers lowered the cost of arithmetic. They note that when looking to assess the impact of radical technological change, one should determine which task’s cost it is reducing. For AI, they argue, that task is prediction, or the ability to take the information you have and generate information you don’t have.

    From this perspective, the current wave of AI technologies is enhancing managers’ ability to make predictions (such as identifying which job candidates will be most successful), while the most valuable worker skills will continue to involve judgment (such as mentoring, providing emotional support, and taking ethical positions). Humans have more advanced judgment skills than computers, a state of affairs that will continue into the near future. One implication: Managerial skill sets will need to adjust as prediction tasks are given over to computers. More generally, the implications of this trend will have an impact far beyond individual skill sets.

    Implications for Business Strategy

    AI is already having an effect on the composition and deployment of workforces in a variety of industries. In their forthcoming article, Agrawal and colleagues point out that at the start of the 21st century, the recognized prediction problems were classic statistical questions, such as inventory management and demand forecasting, but over the last 10 years, researchers have learned that image recognition, driving, and translation may also be framed as prediction problems. As the range of tasks recast as prediction problems continues to grow, the authors argue, the managerial challenges will shift toward training workers in judgment-related rather than prediction-related skills, assessing the rate and direction of the adoption of AI technologies in order to properly time the shift in workforce training, and developing management processes that build the most effective teams of judgment-focused humans and prediction-focused artificial intelligence agents.

    However, managing workforce change is just the beginning of AI-related strategic issues that leaders need to consider. In more data-rich business environments, for instance, the effectiveness of AI technologies will be only as good as the data they have access to, and the most valuable data may exist beyond the borders of one’s own organization.

    One implication for business strategists is to look for competitive advantage in strategic alliances that depend on data sharing. German automakers BMW, Daimler, and Volkswagen are a case in point. They formed an alliance to buy Berlin-based HERE, a digital mapping company, in order to create a real-time platform that shows a host of driving conditions — such as traffic congestion, estimated commute times, and weather — based on data collected from cars and trucks from each brand. None of the car brands in the alliance would have been able to create a sufficiently robust data platform without the others’ participation. In addition to providing a customer service, the alliance’s data platform is expected to support business relationships with municipalities, drivers, insurance companies, and more.

    Risky Business With AI

    These effects on business are far from guaranteed and, importantly, will affect different organizations in different ways; many of them are under managerial control. These changes also bring considerable risk, and many of the risks, too, are under managerial influence. The short list below samples some of the looming threats to business from AI.

    Replacement Threat — AI promises to enhance human performance in a number of ways, but some jobs may no longer be necessary in an organization that relies on AI systems. Knowledge worker positions are not immune: Insurance companies, like Fukoku Mutual Life Insurance company in Japan, are already beginning to use AI (instead of human agents) to match customers with the right insurance plans. Another replacement threat comes from humans who are physically integrated with intelligent machine systems. As this labor group increases in size and acceptability, the line will blur between human intelligence and artificial intelligence; human resource departments may need a name change.

    Dependence Threat — As AI enhances human performance on making predictions and decisions, human dependence on smart machines and algorithms to do critical business tasks creates new vulnerabilities, inefficiencies and, potentially, ineffective operations. Errors and biases may creep into algorithms over time undetected, multiplying cancer-like, until some business operation goes awry.

    Security Threat — Sophisticated algorithms that steal information are a reality, and a major reason cybersecurity has become a $75-billion-per-year industry. Algorithmic thievery is creating a cybersecurity arms race. “The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of Future Crimes. Protecting sensitive corporate information with AI from AI is not just a problem for IT, but an important issue for a company’s senior leadership, including its board of directors.

    Privacy Threat — As more of the labor force becomes information workers, they themselves become sources of information for corporate algorithms to collect and analyze. Widespread use of these algorithms undermines even the illusion of privacy for employees, and raises many ethical issues about where to draw the line between supporting workers’ autonomy and freedom — important sources of creativity — and monitoring their activity.

    The net effects of the strategic implications of AI, along with related risks, are uncertain — but what is certain is that AI will change business.

    What’s Next?

    What counts as AI is sure to evolve. It has already. Years ago, pundits and theorists alike believed that any computer smart enough to beat a chess champion would constitute a form of AI, but when Deep Blue defeated then-champion Garry Kasparov at chess in 1997, not even IBM characterized the win as a triumph of AI. The bar for AI moved, as it will inevitably move again.

    Management threats from AI will also surely change, as will AI’s implications for business strategy. Understanding these changes and their implications for management is the driving force behind MIT SMR’s new research initiative about AI and strategy. On behalf of our research team, we look forward to sharing the results of that research with you over the course of this year.

A Strategist’s Guide to Artificial Intelligence

As the conceptual side of computer science becomes practical and relevant to business, companies must decide what type of AI role they should play.

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality, and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corporation, the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it adjusts local yield numbers downward. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined, and less expensive automated claims process.

Monsanto paid nearly US$1 billion to buy Climate Corporation in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations, and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence, but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54 percent of the respondents said they were making substantial investments in AI today. But only 20 percent said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art, and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.
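
To make the “percepts in, actions out” framing concrete, here is a minimal agent-loop sketch in Python; the sensor and actuator functions are hypothetical stand-ins, and the decision rule is deliberately trivial.

    # A minimal agent loop in the Russell/Norvig sense: percepts in, actions out.
    # read_percept() and take_action() are hypothetical stand-ins for whatever
    # sensors and actuators the real environment provides.

    def choose_action(percept, memory):
        """Map the latest percept (plus remembered state) to an action."""
        memory.append(percept)
        # Deliberately trivial policy: react only to the newest percept.
        return "investigate" if percept == "anomaly" else "wait"

    def run_agent(read_percept, take_action, steps=100):
        memory = []
        for _ in range(steps):
            percept = read_percept()          # a signal the programmer did not script
            action = choose_action(percept, memory)
            take_action(action)               # acting back on the environment

    # Stub wiring for a dry run:
    run_agent(read_percept=lambda: "anomaly", take_action=print, steps=3)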

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker, and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure, or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
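
A toy sketch of that first, rule-based approach might look like the following; the conversation format and keyword rules are invented purely for illustration.

    # Toy rule-based (heuristic) emotion reader: hand-written conditionals checked
    # in a fixed order, starting with emotions evident in the recent past.

    RECENT_WINDOW = 3   # how many prior turns count as "the recent past"

    def infer_emotion(turns):
        """turns: list of (speaker, text, tagged_emotion_or_None), oldest first."""
        # Heuristic 1: reuse an emotion that was evident in the recent past.
        for _, _, emotion in reversed(turns[-RECENT_WINDOW:]):
            if emotion:
                return emotion
        # Heuristic 2: fall back to simple keyword rules on the latest utterance.
        last_text = turns[-1][1].lower()
        if any(word in last_text for word in ("thanks", "great", "perfect")):
            return "pleased"
        if any(word in last_text for word in ("still", "again", "why")):
            return "frustrated"
        return "neutral"

    print(infer_emotion([("customer", "It's broken again, why?", None)]))   # frustrated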

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has programmed it to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
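
The strategy described is simple frequency counting over what users actually type. A minimal sketch of that idea, over an invented query log, could look like this:

    from collections import Counter

    # Minimal next-word completion in the spirit described above: count which words
    # most often follow a given word in a (hypothetical) query log, then suggest the
    # top three. Nothing is programmed about any particular word.

    def build_model(queries):
        following = {}
        for query in queries:
            words = query.lower().split()
            for prev, nxt in zip(words, words[1:]):
                following.setdefault(prev, Counter())[nxt] += 1
        return following

    def suggest(model, word, k=3):
        return [w for w, _ in model.get(word.lower(), Counter()).most_common(k)]

    log = ["artificial intelligence", "artificial selection", "artificial insemination",
           "artificial intelligence jobs", "artificial intelligence courses"]
    print(suggest(build_model(log), "artificial"))   # ['intelligence', 'selection', 'insemination']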

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths, and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation (see Exhibit A).
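
A minimal, untrained sketch of that layered structure, with random placeholder weights, shows how each layer re-represents the output of the one below it:

    import numpy as np

    # Skeleton of a multilayered ("deep") network, untrained and with random
    # placeholder weights: each layer re-represents the output of the layer below,
    # which is how simple features (edges) compose into richer ones (parts, faces).

    rng = np.random.default_rng(0)

    def layer(inputs, n_out):
        weights = rng.normal(size=(inputs.shape[-1], n_out))
        return np.maximum(0.0, inputs @ weights)     # linear map + ReLU nonlinearity

    pixels = rng.random(784)                  # a flattened 28x28 image, for example
    h1 = layer(pixels, 128)                   # low-level features (lines, curves)
    h2 = layer(h1, 64)                        # mid-level features (eyes, mouths)
    scores = h2 @ rng.normal(size=(64, 10))   # scores for 10 hypothetical categories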

Though it is the machine architecture closest to a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.
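
The behavior-tracking stage can be pictured as a simple scoring loop. The sketch below, with invented data structures, nudges per-topic scores from clicks and reading time and re-ranks the next batch of articles accordingly:

    # Sketch of the "analyze behavior" stage described above: nudge a per-topic score
    # up when a reader clicks and spends time on a story, then rank the next batch of
    # articles by those scores. The data structures are invented for illustration.

    def update_profile(profile, topic, clicked, seconds_read, lr=0.1):
        signal = (1.0 if clicked else -0.2) + min(seconds_read, 300) / 300.0
        profile[topic] = profile.get(topic, 0.0) + lr * signal
        return profile

    def rank_articles(profile, articles):
        # articles: list of (headline, topic) pairs
        return sorted(articles, key=lambda art: profile.get(art[1], 0.0), reverse=True)

    profile = update_profile({}, "climate", clicked=True, seconds_read=240)
    print(rank_articles(profile, [("Rate cut looms", "markets"),
                                  ("Corn belt shifts north", "climate")]))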

AI applications in daily use include smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, voice-driven home devices such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick–style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com, and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

• Assisted intelligence, now widely available, improves what people and organizations are already doing.

• Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.

• Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another, but require different types of investment, different staffing considerations, and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
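
Gmail’s actual model is not public; a toy sketch of the general pattern, a text classifier trained on labeled examples (here with scikit-learn’s naive Bayes), illustrates the idea:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy tab classifier: learn from labeled examples, then route new mail with no
    # hand-written rules. Gmail's real model is far larger and not public; the
    # training snippets below are invented.

    train_texts = ["your invoice is attached", "weekend sale 50% off everything",
                   "dinner on friday?", "limited time offer just for you",
                   "project status update", "tagged you in a photo"]
    train_tabs  = ["Primary", "Promotions", "Primary",
                   "Promotions", "Primary", "Social"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(train_texts, train_tabs)
    print(model.predict(["flash sale: limited time offer"]))   # likely ['Promotions']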

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance, and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Company used assisted intelligence to improve its field service operations. This 70-plus-year-old, family-owned general contractor provides, among other services to the oil and gas industry, maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts, or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20 percent, a rate that should continue to improve as the software learns to recognize more patterns.
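
The article does not say which technique the analysis used. One plausible sketch, assuming labeled historical service-call records and invented feature columns, is a classifier that flags calls at risk of a reroll before the truck leaves:

    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical sketch: predict reroll risk from features of a service call.
    # All columns are invented for illustration: equipment age (years), whether the
    # reported symptom matches a known part failure, technician's experience with
    # this equipment type (years), and number of prior calls at the same site.

    X_train = [[8, 0, 1, 3], [2, 1, 5, 0], [12, 0, 2, 4], [5, 1, 8, 1],
               [9, 0, 1, 2], [1, 1, 6, 0]]
    y_train = [1, 0, 1, 0, 1, 0]          # 1 = the call ended in a truck reroll

    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print(model.predict_proba([[10, 0, 2, 3]])[0][1])   # estimated reroll probability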

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
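
The simulate-every-variation pattern can be sketched in a few lines; the demand function below is invented purely for illustration and stands in for the automaker’s far richer model:

    import itertools, random

    # Sketch of "simulate every variation, then compare": enumerate combinations of
    # launch levers and score each with a toy, invented demand simulation. A real
    # model would have vastly more levers and a far richer behavioral simulation.

    cities     = ["dense_urban", "suburban", "rural"]
    marketing  = ["digital", "tv", "dealer_events"]
    price_usd  = [22000, 27000, 32000]

    def simulate_demand(city, channel, price, trials=200):
        random.seed(hash((city, channel, price)) % 2**32)   # repeatable toy run
        base = {"dense_urban": 0.9, "suburban": 1.0, "rural": 0.7}[city]
        lift = {"digital": 1.1, "tv": 1.0, "dealer_events": 0.95}[channel]
        buy_probability = base * lift * (25000 / price) * 0.5
        return sum(random.random() < buy_probability for _ in range(trials))

    results = {combo: simulate_demand(*combo)
               for combo in itertools.product(cities, marketing, price_usd)}
    best = max(results, key=results.get)
    print(best, results[best])   # the variation with the highest simulated demand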

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee, and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
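
Netflix’s production system is proprietary; a toy user-based collaborative filter over an invented viewing matrix conveys the core idea of recommending what similar viewers watched:

    import numpy as np

    # Toy user-based collaborative filtering: score unseen titles for a viewer by
    # what the most similar viewers watched. Netflix's production system is far more
    # elaborate; the titles and viewing matrix below are invented.

    titles  = ["drama_a", "comedy_b", "docu_c", "thriller_d"]
    watched = np.array([          # rows = viewers, 1 = watched to completion
        [1, 0, 1, 0],
        [1, 1, 1, 0],
        [0, 1, 0, 1],
        [1, 0, 1, 1],
    ], dtype=float)

    def recommend(user_idx, k=2):
        sims = watched @ watched[user_idx]      # viewing overlap with every viewer
        sims[user_idx] = 0                      # ignore self-similarity
        neighbors = np.argsort(sims)[-k:]       # the k most similar viewers
        scores = watched[neighbors].sum(axis=0)
        scores[watched[user_idx] > 0] = -1      # never re-recommend what was seen
        return titles[int(np.argmax(scores))]

    print(recommend(0))   # 'comedy_b' with this toy matrix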

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity, and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the United States. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data, and (as noted above) farming.
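
Luminance’s methods are not public; a generic relevance-ranking sketch using TF-IDF cosine similarity, with invented case snippets, gives the shape of the task:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Generic relevance ranking: score past opinions against the facts of the current
    # matter with TF-IDF cosine similarity. A stand-in, not Luminance's actual method;
    # the case snippets are invented.

    cases = ["tenant withheld rent after landlord failed to repair heating",
             "employee dismissed without notice during probation period",
             "contractor liable for water damage from defective plumbing work"]
    query = ["landlord refused repairs, so the tenant stopped paying rent"]

    vectorizer = TfidfVectorizer(stop_words="english")
    case_vectors = vectorizer.fit_transform(cases)
    scores = cosine_similarity(vectorizer.transform(query), case_vectors)[0]

    for score, case in sorted(zip(scores, cases), reverse=True):
        print(f"{score:.2f}  {case}")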

To develop applications like these, you’ll need to marshal your own imagination to look for products, services, or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates, and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material, or line of products? Could you use this information to redesign your products, avoid recalls, or spark innovation in some way?
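
One simple way to surface only the unusual or noteworthy correlations is to compute pairwise correlations across the metrics and report just the strong ones; the column names below are invented:

    import pandas as pd

    # Sketch: scan a product-metrics table and surface only the strong pairwise
    # correlations instead of every number. Column names and values are invented.

    df = pd.DataFrame({
        "repairs_per_1k":  [3, 9, 2, 12, 4, 11],
        "warranty_cost":   [120, 410, 95, 530, 160, 480],
        "repeat_purchase": [0.42, 0.18, 0.45, 0.12, 0.40, 0.15],
        "units_sold_k":    [50, 48, 61, 47, 55, 52],
    })

    corr = df.corr()
    THRESHOLD = 0.8
    for a in corr.columns:
        for b in corr.columns:
            if a < b and abs(corr.loc[a, b]) >= THRESHOLD:
                print(f"{a} vs {b}: r = {corr.loc[a, b]:+.2f}")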

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience, and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions, and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest thing so far to an autonomous service at scale is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone), and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
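
The Boeing and Carnegie Mellon work is only named here, not described. A generic predictive-maintenance sketch, with invented sensor summaries and a simple regression, shows the shape of the problem:

    from sklearn.linear_model import LinearRegression

    # Generic predictive-maintenance sketch (not Boeing's method): estimate flight
    # hours remaining before service from simple sensor summaries. Features are
    # invented: mean exhaust-gas temperature, vibration RMS, and cycles flown.

    X_train = [[612, 0.31, 1200], [640, 0.52, 3400], [605, 0.28, 800],
               [655, 0.61, 4100], [622, 0.40, 2200], [648, 0.57, 3800]]
    y_train = [900, 320, 1100, 150, 640, 220]     # hours until maintenance was needed

    model = LinearRegression().fit(X_train, y_train)
    print(model.predict([[630, 0.45, 2600]]))     # estimated hours remaining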

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

• Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.

• Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.

• Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47 percent of the jobs in the U.S. at risk; a 2016 Forrester research report estimated it at 6 percent, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create new jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corporation, Oscar W. Larson, Netflix, and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Reprint No. 17210

Author Profile:

  • Anand Rao is a principal with PwC US based in Boston. He is an innovation leader for PwC’s data and analytics consulting services. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.
  • Also contributing to this article were PwC principal and assurance innovation leader Michael Baccala, PwC senior research fellow Alan Morrison, and writer Michael Fitzgerald.