What Managers Need to Know About Artificial Intelligence

The field of artificial intelligence (AI) is finally yielding valuable smart devices and applications that do more than win games against human champions. According to a report from the Frederick S. Pardee Center for International Futures at the University of Denver, the products of AI are changing the competitive landscape in several industry sectors and are poised to upend operations in many business functions.

So, what do managers need to know about AI?

MIT Sloan Management Review and the Boston Consulting Group have joined forces to find out. Our new research initiative, Artificial Intelligence & Business Strategy, explores the most important business opportunities and challenges posed by AI, beginning with three questions:

  1. From a managerial perspective, what is artificial intelligence?
  2. How will AI influence business strategy?
  3. What are the major management risks from AI?

Artificial Intelligence for Managers

Artificial intelligence covers a diverse set of abilities, much like Howard Gardner’s breakdown of human intelligence into multiple intelligences. These abilities include deep learning, reinforcement learning, robotics, computer vision, and natural language processing. The first report from Stanford’s One Hundred Year Study on Artificial Intelligence — “Artificial Intelligence and Life in 2030” — lists 11 applications of AI altogether. Each of these represents narrow AI, defined as a machine-based system designed to address a specific problem (such as playing Go or chess) by refining its own solution using rules and approaches not specifically programmed by its maker.

More general AI refers to a system that can solve many types of problems on its own and is self-aware. No such general AI system currently exists (at least, none has made it into public view). Two reports — the White House Office of Science and Technology Policy’s 2016 report, “Preparing for the Future of Artificial Intelligence,” and the Stanford paper mentioned earlier — offer more detail on the various types of narrow AI.

In an MIT Sloan Management Review article, Ajay Agrawal, Joshua Gans, and Avi Goldfarb provide a managerial perspective on AI, arguing that the business value of AI lies in its ability to lower the cost of prediction, just as computers lowered the cost of arithmetic. To assess the impact of a radical new technology, they note, one should determine which task’s cost it reduces. For AI, that task is prediction: the ability to take the information you have and generate information you don’t have.

From this perspective, the current wave of AI technologies is enhancing managers’ ability to make predictions (such as identifying which job candidates will be most successful), while the most valuable worker skills will continue to involve judgment (such as mentoring, providing emotional support, and taking ethical positions). Humans have more advanced judgment skills than computers, a state of affairs that will persist for the foreseeable future. One implication: managerial skill sets will need to adjust as prediction tasks are handed over to computers. More generally, the implications of this trend will reach far beyond individual skill sets.

Implications for Business Strategy

AI is already affecting the composition and deployment of workforces in a variety of industries. In their forthcoming article, Agrawal and colleagues point out that at the start of the 21st century, the recognized prediction problems were classic statistical questions, such as inventory management and demand forecasting; over the last 10 years, researchers have learned that image recognition, driving, and translation can also be framed as prediction problems. As the range of tasks recast as prediction problems continues to grow, the authors argue, the managerial challenges will shift toward three tasks: retraining workers in judgment-related rather than prediction-related skills; assessing the rate and direction of AI adoption in order to time that retraining properly; and developing management processes that build the most effective teams of judgment-focused humans and prediction-focused AI agents.

Managing workforce change, however, is just the beginning of the AI-related strategic issues that leaders need to consider. In data-rich business environments, for instance, AI technologies will be only as effective as the data they can access, and the most valuable data may exist beyond the borders of one’s own organization.

One implication for business strategists is to look for competitive advantage in strategic alliances that depend on data sharing. German automakers BMW, Daimler, and Volkswagen are a case in point. They formed an alliance to buy HERE, a Berlin-based digital mapping company, in order to create a real-time platform that shows a host of driving conditions — such as traffic congestion, estimated commute times, and weather — based on data collected from each brand’s cars and trucks. No single carmaker in the alliance could have created a sufficiently robust data platform without the others’ participation. In addition to serving customers, the alliance’s data platform is expected to support business relationships with municipalities, drivers, insurance companies, and more.

Risky Business With AI

These effects on business are far from guaranteed, and they will play out differently in different organizations; many of them are subject to managerial control. The changes also bring considerable risk, much of which is likewise under managerial influence. The short list below samples some of the looming threats AI poses to business.

Replacement Threat — AI promises to enhance human performance in a number of ways, but some jobs may no longer be necessary in an organization that relies on AI systems. Knowledge worker positions are not immune: insurance companies, such as Japan’s Fukoku Mutual Life Insurance, are already beginning to use AI instead of human agents to match customers with the right insurance plans. Another replacement threat comes from humans who are physically integrated with intelligent machine systems. As this labor group grows in size and acceptability, the line between human intelligence and artificial intelligence will blur; human resource departments may need a name change.

Dependence Threat — As AI enhances human performance in making predictions and decisions, human dependence on smart machines and algorithms for critical business tasks creates new vulnerabilities, inefficiencies and, potentially, ineffective operations. Errors and biases may creep into algorithms undetected over time, multiplying like a cancer until some business operation goes awry.

Security Threat — Sophisticated algorithms that steal information are a reality, and a major reason cybersecurity has become a $75-billion-per-year industry. Algorithmic thievery is creating a cybersecurity arms race. “The thing people don’t get is that cybercrime is becoming automated and it is scaling exponentially,” said Marc Goodman, a law enforcement agency adviser and the author of Future Crimes. Using AI to protect sensitive corporate information from AI is not just a problem for IT; it is an important issue for a company’s senior leadership, including its board of directors.

Privacy Threat — As more of the labor force is made up of information workers, employees themselves become sources of information for corporate algorithms to collect and analyze. Widespread use of these algorithms undermines even the illusion of privacy for employees, and raises many ethical questions about where to draw the line between supporting workers’ autonomy and freedom — important sources of creativity — and monitoring their activity.

The net effects of AI’s strategic implications, along with the related risks, are uncertain — but what is certain is that AI will change business.

What’s Next?

What counts as AI is sure to evolve. It has already. Years ago, pundits and theorists alike believed that any computer smart enough to beat a chess champion would constitute a form of AI, but when Deep Blue defeated then-champion Garry Kasparov at chess in 1997, not even IBM characterized the win as a triumph of AI. The bar for AI moved, as it will inevitably move again.

Management threats from AI will also surely change, as will AI’s implications for business strategy. Understanding these changes and their implications for management is the driving force behind MIT SMR’s new research initiative about AI and strategy. On behalf of our research team, we look forward to sharing the results of that research with you over the course of this year.

A Strategist’s Guide to Artificial Intelligence

As the conceptual side of computer science becomes practical and relevant to business, companies must decide what type of AI role they should play.

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality, and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corporation, the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it adjusts local yield numbers downward. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined, and less expensive automated claims process.

Monsanto paid nearly US$1 billion to buy Climate Corporation in 2013, giving the startup’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and from sensors planted in the fields so that the models improve in accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations, and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence, but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54 percent of the respondents said they were making substantial investments in AI today. But only 20 percent said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art, and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker, and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure, or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
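
To make the contrast concrete, here is a minimal sketch of the heuristic approach in Python. Every rule, keyword, and emotion label below is invented for illustration; a production system would encode far richer conditional instructions, but the principle of hand-written rules that check the recent past first is the same.

```python
# Illustrative sketch of the heuristic (conditional-instruction) approach.
# The keyword rules below are invented for demonstration purposes.
EMOTION_KEYWORDS = {
    "angry": {"furious", "unacceptable", "outraged"},
    "happy": {"great", "thanks", "wonderful"},
    "worried": {"concerned", "risk", "problem"},
}

def detect_emotion(utterance, recent_emotions):
    """Apply hand-coded rules, checking recently seen emotions first."""
    words = set(utterance.lower().split())
    # Rule 1: prefer an emotion that was evident in the recent past.
    for emotion in reversed(recent_emotions):
        if words & EMOTION_KEYWORDS.get(emotion, set()):
            return emotion
    # Rule 2: otherwise, check every known emotion in turn.
    for emotion, keywords in EMOTION_KEYWORDS.items():
        if words & keywords:
            return emotion
    return "neutral"

history = []
for line in ["This delay is unacceptable", "Thanks, that is great news"]:
    emotion = detect_emotion(line, history)
    history.append(emotion)
    print(line, "->", emotion)
```

Note that nothing here is learned; every behavior is traceable to a rule a programmer wrote, which is exactly what separates this approach from the one described next.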

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has programmed it to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
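
The underlying mechanics can be sketched in a few lines of Python: count which words follow a given word in a body of text, then suggest the most frequent successors. The toy corpus below is invented, and Google’s production system works over vastly more data, but the frequency-counting principle is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for query logs; invented for illustration.
corpus = (
    "artificial intelligence is here artificial selection shapes species "
    "artificial insemination is used in farming artificial intelligence wins"
).split()

# Count each word's successors across the corpus.
successors = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    successors[word][nxt] += 1

def suggest(word, k=3):
    """Return the k words most frequently typed after `word`."""
    return [w for w, _ in successors[word].most_common(k)]

print(suggest("artificial"))  # ['intelligence', 'selection', 'insemination']
```

The program was never told which words belong after artificial; the ranking emerges entirely from the data it observes.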

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths, and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation (see Exhibit A).
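
A rough sketch of that layered structure, using numpy: each layer transforms the output of the layer below it, so later layers can represent compositions of the simpler features detected earlier. The weights here are random, purely to show the shape of the computation; a real network learns them from data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Three stacked layers: each consumes the representation built by the
# one below it -- the "nested hierarchy of concepts" described above.
layer_sizes = [64, 32, 16, 4]   # input features -> ... -> output classes
weights = [rng.normal(0, 0.1, (m, n))
           for m, n in zip(layer_sizes, layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = relu(x @ w)          # intermediate layers build up features
    return x @ weights[-1]       # final layer scores the output classes

x = rng.normal(size=64)          # stand-in for, say, pixel features
print(forward(x))                # four raw class scores
```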

Though it is the machine that comes closest to mimicking a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

News aggregation software, for example, had long relied on rudimentary AI to curate articles based on people’s requests. Then it evolved to analyze behavior, tracking the way people clicked on articles and the time they spent reading, and adjusting the selections accordingly. Next it aggregated individual users’ behavior with the larger population, particularly those who had similar media habits. Now it is incorporating broader data about the way readers’ interests change over time, to anticipate what people are likely to want to see next, even if they have never clicked on that topic before. Tomorrow’s AI aggregators will be able to detect and counter “fake news” by scanning for inconsistencies and routing people to alternative perspectives.

AI applications in daily use include all smartphone digital assistants, email programs that sort messages by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, home devices such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — while others, such as connected-car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick–style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com, and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

• Assisted intelligence, now widely available, improves what people and organizations are already doing.

• Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.

• Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another, but require different types of investment, different staffing considerations, and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
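
The general idea, though not Google’s actual model, can be illustrated with a toy text classifier. This sketch trains scikit-learn’s naive Bayes on a handful of invented messages; Gmail’s production models and training data are far more elaborate.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny invented training set standing in for millions of labeled emails.
emails = [
    "meeting agenda attached please review",       # Primary
    "your friend tagged you in a photo",           # Social
    "flash sale 50 percent off this weekend",      # Promotions
    "quarterly report draft for your comments",    # Primary
    "new follower commented on your profile",      # Social
    "limited time offer free shipping on orders",  # Promotions
]
labels = ["Primary", "Social", "Promotions"] * 2

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(emails), labels)

# Route unseen messages to a tab.
test = ["team offsite agenda attached", "weekend sale with free shipping"]
print(model.predict(vectorizer.transform(test)))
```

In a real system the labels come from observed behavior at scale rather than hand-tagging, which is why the feature improves as more people use it.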

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance, and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Company used assisted intelligence to improve its field service operations. This 70-plus-year-old family-owned general contractor provides, among other services to the oil and gas industry, maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts, or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20 percent, a rate that should continue to improve as the software learns to recognize more patterns.
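
The published account doesn’t describe the software involved, but one plausible shape for such an analysis is a simple classifier over historical service-call records, sketched here with invented features and data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented service-call records: [parts_on_truck, technician_experience_yrs,
# issue_complexity 1-5]; label 1 means the call ended in a truck reroll.
X = np.array([[1, 2, 4], [0, 1, 5], [1, 8, 2], [1, 5, 1],
              [0, 3, 4], [1, 10, 3], [0, 2, 5], [1, 7, 2]])
y = np.array([1, 1, 0, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score tomorrow's scheduled calls; high-probability calls can be fixed
# in advance, e.g. by restocking the truck or swapping in a senior tech.
tomorrow = np.array([[0, 1, 4], [1, 9, 2]])
print(model.predict_proba(tomorrow)[:, 1])  # estimated reroll probabilities
```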

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
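
In skeleton form, such a simulator enumerates combinations of launch levers and scores each with a demand model. Everything below, from the parameter grids to the demand function, is invented to show the structure rather than the automaker’s actual model.

```python
import itertools

# Invented launch levers; a real model has many more dimensions, which is
# how the variation count climbs into the hundreds of thousands.
city_types = ["dense_urban", "suburban", "rural"]
price_points = [22000, 27000, 32000]
marketing = ["digital_heavy", "dealer_led", "mixed"]

def simulated_units_sold(city, price, campaign):
    """Toy stand-in for a demand model fitted to trip and sales data."""
    base = {"dense_urban": 900, "suburban": 1400, "rural": 600}[city]
    price_effect = (32000 - price) / 100          # cheaper sells more
    campaign_effect = {"digital_heavy": 1.1, "dealer_led": 1.0,
                       "mixed": 1.05}[campaign]
    return int((base + price_effect) * campaign_effect)

# Enumerate every scenario and surface the most promising launches.
scenarios = itertools.product(city_types, price_points, marketing)
ranked = sorted(scenarios,
                key=lambda s: simulated_units_sold(*s), reverse=True)
for scenario in ranked[:3]:
    print(scenario, simulated_units_sold(*scenario))
```

As launch outcomes flow back in, the demand function is refit to the new data, which is what makes the predictions sharpen over time.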

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee, and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
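
The core idea of recommending from the audience at large, rather than from one viewer’s history alone, can be sketched as item-based collaborative filtering over a tiny invented ratings matrix. Netflix’s production system is of course far more sophisticated.

```python
import numpy as np

# Rows = viewers, columns = titles; 0 means "not yet watched". Invented data.
titles = ["drama_a", "comedy_b", "thriller_c", "docu_d"]
ratings = np.array([
    [5, 0, 4, 0],
    [4, 1, 5, 0],
    [0, 5, 1, 4],
    [1, 4, 0, 5],
], dtype=float)

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

def recommend(viewer):
    """Score unwatched titles by similarity to titles the viewer liked."""
    scores = {}
    for j, title in enumerate(titles):
        if ratings[viewer, j] == 0:              # only unwatched titles
            scores[title] = sum(
                cosine(ratings[:, j], ratings[:, k]) * ratings[viewer, k]
                for k in range(len(titles)) if ratings[viewer, k] > 0)
    return max(scores, key=scores.get)

print(recommend(0))  # a title viewer 0 hasn't seen, liked by similar viewers
```

Every new rating reshapes the similarity structure, which is the feedback loop the virtuous circle described above depends on.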

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity, and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the United States. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data, and (as noted above) farming.
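
The relevance-ranking step can be approximated with TF-IDF similarity: represent each past opinion and the current matter as weighted word vectors, then rank opinions by how close they sit to the matter at hand. The one-line “opinions” below are invented stand-ins; this is not Luminance’s method, just the textbook baseline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented stand-ins for a corpus of past opinions.
opinions = [
    "breach of software licensing agreement and damages",
    "employment dispute over non-compete clause enforcement",
    "patent infringement claim on wireless charging method",
    "contract dispute over late delivery of licensed software",
]
current_matter = "dispute over breach of a software license contract"

vectorizer = TfidfVectorizer(stop_words="english")
doc_vectors = vectorizer.fit_transform(opinions + [current_matter])

# Rank past opinions by similarity to the current matter.
similarities = cosine_similarity(doc_vectors[-1], doc_vectors[:-1]).ravel()
for score, opinion in sorted(zip(similarities, opinions), reverse=True):
    print(f"{score:.2f}  {opinion}")
```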

To develop applications like these, you’ll need to marshal your own imagination to look for products, services, or processes that would not be possible at all without AI. For example, an AI system can track a wide range of product features, warranty costs, repeat purchase rates, and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material, or line of products? Could you use this information to redesign your products, avoid recalls, or spark innovation in some way?
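
A stripped-down version of that correlation scan, using pandas over invented repair records: compute pairwise correlations among the metrics and surface only the unusually strong ones for human review.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 500

# Invented repair records; in practice these come from warranty systems.
df = pd.DataFrame({
    "humidity_at_plant": rng.uniform(30, 90, n),
    "supplier_b_parts": rng.integers(0, 2, n).astype(float),
    "warranty_cost": rng.normal(100, 20, n),
})
# Plant a relationship so the scan has something to find.
df["warranty_cost"] += 40 * df["supplier_b_parts"]

# Flag only unusually strong correlations for human attention.
corr = df.corr()
THRESHOLD = 0.5
for a in corr.columns:
    for b in corr.columns:
        if a < b and abs(corr.loc[a, b]) >= THRESHOLD:
            print(f"flag: {a} vs {b}: r = {corr.loc[a, b]:.2f}")
```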

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience, and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions, and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest thing to an autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone), and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
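
A toy version of flight-data-driven maintenance prediction: fit a trend to a drifting sensor reading and estimate when it will cross a service threshold. The data, threshold, and linear model are all invented for illustration; the Boeing and Carnegie Mellon work is far more involved.

```python
import numpy as np

# Invented engine vibration readings, one per flight, drifting upward.
rng = np.random.default_rng(42)
flights = np.arange(200)
vibration = 1.0 + 0.004 * flights + rng.normal(0, 0.05, 200)

MAINTENANCE_THRESHOLD = 2.0  # invented alert level

# Fit a linear trend and extrapolate to the threshold crossing.
slope, intercept = np.polyfit(flights, vibration, 1)
flights_until_service = (MAINTENANCE_THRESHOLD - intercept) / slope - flights[-1]
print(f"schedule maintenance in about {flights_until_service:.0f} flights")
```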

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

• Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.

• Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.

• Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47 percent of the jobs in the U.S. at risk; a 2016 Forrester research report estimated it at 6 percent, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create new jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corporation, Oscar W. Larson, Netflix, and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Reprint No. 17210

Author Profile:

  • Anand Rao is a principal with PwC US based in Boston. He is an innovation leader for PwC’s data and analytics consulting services. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.
  • Also contributing to this article were PwC principal and assurance innovation leader Michael Baccala, PwC senior research fellow Alan Morrison, and writer Michael Fitzgerald.

What You Read: The 15 Most Popular s+b Articles of 2017


10 Principles of Strategy through Execution

by Ivan de Souza, Richard Kauffeld, and David van Oss

How to link where your company is headed with what it does best. See also “A Guide to Strategy through Execution.”

A Strategist’s Guide to Artificial Intelligence

by Anand Rao

As the conceptual side of computer science becomes practical and relevant to business, companies must decide what type of AI role they should play.

Us Versus Them: Reframing Resistance to Change

by Elizabeth Doty

How to bridge the gap between those who champion transformation and those who challenge it.

Building Trust While Cutting Costs

by Vinay Couto, Deniz Caglar, and John Plansky

During a restructuring, rumors spread and fear takes hold. You can reduce the turmoil by finding ways to inform, empower, and inspire employees.

How Leaders Can Improve Their Thinking Agility

by Jesse Sostrin

Not all thinking is created equal. Here’s how to get the most out of your brain when it really counts.

Are We on the Verge of a New Golden Age?

by Carlota Perez, Leo Johnson, and Art Kleiner

A long-wave theory of technological and economic change suggests the financial malaise that began in 2007 may be about to end.

Getting to the Critical Few Behaviors that Can Drive Cultural Change

by Kristy Hull

Encouraging a small number of actions can help an organization achieve its strategic and operational objectives.

10 Principles for Leading the Next Industrial Revolution

by Norbert Schwieters and Bob Moritz

Tools and techniques to ensure your company will stand out in the new age of digitization. See also “A Guide to Leading the Next Industrial Revolution.”

Asking the Right Questions Can Frame a Successful Transformation

by Tom Puthiyamadam

When plotting strategy, leaders should worry less about solutions and more about identifying the precise problem they are trying to solve.

The New Class of Digital Leaders

by Pierre Peladeau, Mathias Herzog, and Olaf Acker

Faced with organizational challenges, more and more companies are hiring an executive to manage their digital transformation.

Burn Your Rule Book and Unlock the Power of Principles

by Eric J. McNulty

Clear and simple shared objectives nurture employee energy, ideas, and commitment.

What the Body Tells Us about Leadership

by Art Kleiner

In this Thought Leader interview, social presencing theater innovators Otto Scharmer and Arawana Hayashi describe how to develop your management skills through physical awareness.

Want to Kill Your Performance Rankings? Here’s How to Ensure Success

by David Rock and Beth Jones

Employee engagement rises when frequent, informal conversations replace annual reviews.

Will Stronger Borders Weaken Innovation?

by Barry Jaruzelski, Volker Staack, and Robert Chwalik

The flow of talent, investment, and ideas that has boosted companies’ global R&D efforts may soon be impeded by the rise of economic nationalism.

The Caring Leader

by Augusto Giacoman

There’s a hard-headed business case for expressing concern about employees’ well-being.

 

The AI Debate We Need, by SAMI MAHROUM

Rapid advances in artificial intelligence and related technologies have contributed to fears of widespread job losses and social disruptions in the coming years, giving a sense of urgency to debates about the future of work. But such discussions, though surely worth having, only scratch the surface of what an AI society might look like.

BARCELONA – One can hardly go a day without hearing about a new study describing the far-reaching implications of advances in artificial intelligence. According to countless consultancies, think tanks, and Silicon Valley celebrities, AI applications are poised to change our lives in ways we can scarcely imagine.

The biggest change concerns employment. There is widespread speculation about how many jobs will soon fall victim to automation, but most forecasters agree that it will be in the millions. And it is not just blue-collar jobs that are at stake. So, too, are high-skilled white-collar professions, including law, accounting, and medicine. Entire industries could be disrupted or decimated, and traditional institutions such as universities might have to downsize or close.

Such concerns are understandable. In the current political economy, jobs are the main vehicle for wealth creation and income distribution. When people have jobs, they have the means to consume, which drives production forward. It is not surprising that debates about AI would center on the prospect of mass unemployment, and on the forms of compensation that could become necessary in the future.

But, to understand better what AI will mean for our shared economic future, we should look past the headlines. We can start with insights from Project Syndicate commentators, who assess AI’s economic implications by situating the current technological revolution in a larger historical context. Their analyses suggest that AI will indeed reshape employment across advanced and developing economies alike, but also that the future of work will be but one small part of a much larger story.

FROM EACH AI ACCORDING TO ITS ABILITIES…

For Nobel laureate economist Christopher Pissarides and Jacques Bughin of the McKinsey Global Institute, the AI revolution need not “conjure gloom-and-doom scenarios about the future of work,” so long as governments rise to the challenge of equipping workers “with the right skills” to prepare them for future market needs. Pissarides and Bughin remind us that job displacement from new technologies is nothing new, and often comes in waves. “But throughout that process,” they note, “productivity gains have been reinvested to create new innovations, jobs, and industries, driving economic growth as older, less productive jobs are replaced with more advanced occupations.”

SAP CEO Bill McDermott is similarly optimistic, and sees “nothing to be gained from fearing a dystopian future that we have the power to prevent.” Rather than rendering humans obsolete, McDermott believes that AI applications could liberate millions of people from “the dangerous and repetitive tasks often associated with manual labor.” And he points to the introduction of “collaborative robots” to show that “partnership, not rivalry” will define our future relationship with AI technologies across all sectors. But, as McDermott is quick to point out, this worker-machine dynamic will not come about on its own. “Those leading the change” must not lose sight of the “human element,” or the fact that “there are some things even the smartest machines will never do.”

But, as Laura Tyson of the University of California, Berkeley, warns, the design of new “smart machines” is less important than “the policies surrounding them.” Tyson notes that technological change has, in fact, already been displacing workers for three decades, accounting for an estimated 80% of the job losses in US manufacturing. In her view, we could be heading for a “‘good-jobless future,’ in which a growing number of workers can no longer earn a middle-class income, regardless of their education and skills.” To minimize that risk, she calls on policymakers in advanced economies to “focus on measures that help those who are displaced, such as education and training programs, and income support and social safety nets, including wage insurance, lifetime retraining loans, and portable health and pension benefits.”

Alongside those welcoming or worrying about AI are others who consider current warnings to be premature. For example, Tyson’s University of California, Berkeley, colleague J. Bradford DeLong believes that “it is profoundly unhelpful to stoke fears about robots, and to frame the issue as ‘artificial intelligence taking American jobs.’” Taking a long historical view, DeLong argues that there have been “relatively few cases in which technological progress, occurring within the context of a market economy, has directly impoverished unskilled workers.” Still, like Tyson, he notes that “workers must be educated and trained to use increasingly high-tech tools,” and that redistributive policies will be needed to “maintain a proper distribution of income.”

THE OPTIONS ON THE TABLE

Containing income inequality is in fact one of the primary challenges of the digital age. One possible remedy is a tax on robots, an idea first proposed by Mady Delvaux of the European Parliament and later endorsed by Microsoft founder Bill Gates. Nobel laureate economist Robert Shiller observes that while the idea has drawn derision in many circles, it deserves an airing, because there are undeniable “externalities to robotization that justify some government intervention.” Moreover, there aren’t any obvious alternatives, given that “a more progressive income tax and a ‘basic income’” lack “widespread popular support.”

But Yanis Varoufakis of the University of Athens sees another solution: “a universal basic dividend (UBD), financed from the returns on all capital.” Under Varoufakis’s scheme, the pace of automation and rising corporate profitability would pose no threat to social stability, because society itself would become “a shareholder in every corporation, and the dividends [would be] distributed evenly to all citizens.” At a minimum, Varoufakis contends, a UBD would help citizens recoup or replace some of the income lost to automation.

Similarly, Kaushik Basu of Cornell University thinks there should be a larger focus “on expanding profit-sharing arrangements, without stifling or centralizing market incentives that are crucial to drive growth.” Practically speaking, managing the rise of new tech monopolies that enjoy unjustifiable “returns to scale” would require giving “all of a country’s residents the right to a certain share of the economy’s profits.” At the same time, it will mean replacing “traditional anti-monopoly laws with legislation mandating a wider dispersal of shareholding within each company.”

Another option, notes Stephen Groff of the Asian Development Bank, is to direct workers toward fields that will not necessarily fall prey to automation. For example, “governments should offer subsidies or tax incentives to companies that invest in the skills that humans master better than machines, such as communication and negotiation.” Another idea, notes Kemal Derviş of the Brookings Institution, is a “job mortgage,” whereby firms “with a future need for certain skills would become a kind of sponsor, involving potential future job offers, to a person willing to acquire those skills.”

And at a more fundamental level, argues Andrew Wachtel, the president of the American University of Central Asia, we should be preparing people for an AI future by teaching “skills that make humans human.” The workers of tomorrow, he notes, “will require training in ethics, to help them navigate a world in which the value of human beings can no longer be taken for granted.”

STEPPING OFF THE TREADMILL

And yet, as useful as these ideas are, they do not address a fundamental question of the digital age: Why do we still need jobs? After all, if AI technologies can deliver most of the goods and services that we need at less cost, why should we spend our precious time laboring? The impulse to preserve traditional employment is an artifact of the industrial age, when the work-to-consume dynamic drove growth. But now that capital growth is outpacing job growth, that model is breaking down.

Capital, land, and labor were the three pillars of the industrial age. But digitalization and the so-called platform economy have devalorized land, and the AI revolution now threatens to render much labor obsolete. The question for a fully automated future, then, is whether jobs can be delinked from incomes, and incomes delinked from consumption. If not, then we could be headed for what Robert Skidelsky of Warwick University describes as “a world in which we are condemned to race with machines to produce ever-larger quantities of consumption goods.”

Fortunately, the AI revolution holds out the promise of an alternative future. As Adair Turner of the Institute for New Economic Thinking points out, it is not hard to imagine “a world in which solar-powered robots, manufactured by robots and controlled by artificial intelligence systems, deliver most of the goods and services that support human welfare.” At the same time, the social theorist Jeremy Rifkin, in The Zero Marginal Cost Society, shows how shared platforms could produce countless new goods and services, and how new business models might emerge to monetize those platforms, all at no cost to consumers.

If this sounds farfetched, consider that it is already happening. Billions of people around the world now use platforms such as Facebook, WhatsApp, and Wikipedia for free. As DeLong notes, “More than ever before, we are producing commodities that contribute to social welfare through use value rather than market value.” And people are spending increasingly more time “interacting with information-technology systems where the revenue flow is, at most, a tiny trickle tied to ancillary advertising.”

As it advances, AI could allow us to consume ever more products and services from an expanding “freemium” economy based on network effects and “collective intelligence,” not unlike an open-source community. At the same time, agents in a parallel premium economy will continue to mine AI-based systems to extract new value. In an advanced AI economy, fewer people would hold traditional jobs, governments would collect less in taxes, and countries would have smaller GDPs; yet everyone would be better off, free to consume a widening range of goods that have been decoupled from income.

THE END OF EMPLOYMENT

In such a scenario, a job would become a luxury or hobby rather than a necessity. Those looking for more income would most likely have opportunities to earn it through data mining, in the same way that cryptocurrency miners do today. But, because such income would be useful only for purchasing products and services that have resisted AI production, trading would be consigned to niche markets operated through blockchain networks. As Maciej Kuziemski of the University of Oxford puts it, AI will not just “change human life,” but will also alter “the boundaries and meaning of being human,” beginning with our self-conception as laboring beings.

Again, this may sound farfetched, or even utopian; but it is a more realistic depiction of the future than what one hears in current debates about preserving industrial-era economic frameworks. For example, plenty of people – not least rent-seeking owners of capital – already do not make a living from selling their labor. In an AI society, we could expect to see the Protestant work ethic described by Max Weber gradually become an anachronism. Work would give way to higher forms of human activity, as the German philosopher Josef Pieper envisioned. “The modern person works in order to live, and lives in order to work,” Pieper observed more than 70 years ago. “But, for the ancient and medieval philosopher, this was a strange way to view the world; rather, they would say that we work in order to have leisure.”

In an AI economy, individuals might earn “income” from their data when they partake in physical recreation; make “green” consumption choices; or share stories, pictures, or videos. All of these activities already reap rewards through various apps today. But Princeton University’s Harold James believes the replacement of work with new forms of leisure poses significant hazards. In particular, James worries that AI’s cooptation of most mental labor will usher in a “stupid economy,” defined by atrophying cognitive skills, just as technologies that replaced manual labor gave rise to sedentary lifestyles and expanded waistlines.

In my view, however, there is no reason to think that the technologies of the future will not provide even more opportunities for people to live smarter and more creatively. After all, reaping the full benefits of AI will itself require acts of imagination. Moreover, millions of people will be freed up to perform social work for which robots are unsuited, such as caring for children, the sick, the elderly, and other vulnerable communities. And millions more will engage in leisure-work activities in the freemium economy, where data will be the new “natural resource” par excellence.

MAKING WORKLESSNESS WORK

Still, realizing this vision of the future is far from guaranteed. It will require not just new economic models, but also new forms of governance and legal frameworks. For example, Kuziemski argues that, “Empowering all people in the age of AI will require each individual – not major companies – to own the data they create.” Hernando de Soto of the Institute of Liberty and Democracy adds the corollary that ensuring equal access to data for all people will be no less important.

Such imperatives highlight the fundamental ethical questions surrounding the AI revolution. Ultimately, the regulatory and institutional systems that we create to manage the new technologies will reflect our most deeply held values. But that means regulations and institutions might evolve differently from country to country. This worries Guy Verhofstadt of the Alliance of Liberals and Democrats for Europe Group (ALDE) in the European Parliament, who urges his fellow Europeans to start setting standards for AI now, before governments with fewer concerns about privacy and safety do so first.

With respect to safety, University of Connecticut philosopher Susan Leigh Anderson argues that machines should be permitted “to function autonomously only in areas where there is agreement among ethicists about what constitutes acceptable behavior.” More broadly, she cautions those developing ethical operating protocols for AI technologies that “ethics is a long-studied field within philosophy,” one that “goes far beyond laypersons’ intuitions.”

Underscoring that point, Princeton University’s Peter Singer lists various ethical dilemmas that are already confronting AI developers, and which have no clear solution. For example, he wonders whether driverless cars “should be programmed to swerve to avoid hitting a child running across the road, even if that will put their passengers at risk.” Singer warns against thinking of AI as merely a machine that can beat a human in chess or Go. “It is one thing to unleash AI in the context of a game with specific rules and a clear goal,” he writes. “It is something very different to release AI into the real world, where the unpredictability of the environment may reveal a software error that has disastrous consequences.”

The potential for AI to provoke a backlash will be particularly acute in public services, where robots might manage our personal records or interact with children, the elderly, the sick, or socially marginalized groups. As Simon Johnson and Jonathan Ruane of MIT Sloan remind us, “what is simple for us is hard for even the most sophisticated AI; conversely, AI often can do easily what we regard as difficult.” The challenge, then, will be to determine – and not only on safety grounds – where and when AI should and should not be deployed.

Furthermore, democracies, in particular, will need to establish frameworks for holding those in charge of AI applications accountable. Given AI’s high-tech nature, governments will most likely have to rely on third-party designers and developers to administer public-service applications, which could pose risks to the democratic process. But the University of Oxford ethicist Luciano Floridi fears the opposite scenario, in which “AI is no longer controlled by a guild of technicians and managers,” and has been made “available to billions of people on their smartphones or some other device.”

A BROAD AGENDA

At the end of the day, policymakers setting a course for the future must focus on ensuring a smooth passage into an AI-enabled freemium economy, rather than trying to delay or sabotage the inevitable. They should follow the example of policy interventions in earlier periods of automation. As New York University’s Nouriel Roubini reminds us, “late nineteenth- and early twentieth-century leaders” sought to “minimize the worst features of industrialization.” Accordingly, child labor was abolished, working hours were gradually reduced, and “a social safety net was put in place to protect vulnerable workers and stabilize the (often fragile) macroeconomy.”

A more recent success has been “green” policies that give rise to new business models. Such policies include feed-in tariffs, carbon credits, carbon trading, and Japan’s “Top Runner” program. When thinking about the freemium economy, governments should consider introducing automation offsets, whereby businesses that adopt labor-replacing technologies must also introduce a corresponding share of freemium goods and services into the market.

More broadly, policy approaches to education, skills training, employment, and income distribution should all now assume a post-AI perspective. As Floridi notes, this will require us to question some of our most deeply held convictions. “A world where autonomous AI systems can predict and manipulate our choices,” he observes, “will force us to rethink the meaning of freedom.” Similarly, we will also have to rethink the meaning and purpose of education, skills, jobs, and wages.

Moreover, we will have to re-conceptualize economic value for a context in which most things are free, and spending of any kind is a luxury. We will have to decide on appropriate forms of capital ownership under such conditions. And we will have to create new incentives for people to contribute to society.

All of this will require new forms of proprietary rights, new modes of governance, and new business models. In other words, it will require an entirely new socioeconomic system, one that we will either start shaping or allow to shape us.

Source and author: Sami Mahroum, writing for Project Syndicate since 2012

Sami Mahroum is Director of the Innovation & Policy Initiative at INSEAD and a member of the WEF Regional Strategy Group for the Middle East and North Africa. He is the author of Black Swan Start-ups: Understanding the Rise of Successful Technology Business in Unlikely Places.

Published Project Syndicate, Feb 16, 2018 

Competitive Intelligence: Why You Shouldn’t Be Afraid of Artificial Intelligence by Lili Cheng

Alexander the friendly robot visits the Indoor Park to interact with children by telling classic fairy tales, singing and dancing at Westfield London on August 10, 2016 in London, England.
Jeff Spicer—Getty Images

 

Artificial intelligence is one of the hottest, least understood and most debated technological breakthroughs in modern times. In many ways, the magic of AI is that it’s not something you can see or touch. You may not even realize you are using it today. When your Nest thermostat knows how to set the right temperature at home or when your phone automatically corrects your grammar or when a Tesla car navigates a road autonomously–that’s AI at work.

For most of our lives, people have had to adapt to technology. To find a file on a computer, we input a command on a keyboard attached to one particular machine. To make a phone call, we tap an assortment of numbers on a keypad. To get a piece of information, we type a specific set of keywords into a search engine.

AI is turning that dynamic on its head by creating technologies that adapt to us rather than the other way around–new ways of interacting with computers that won’t seem like computing at all.

Computer scientists have been working on AI technologies for decades, and we’re now seeing that work bear fruit. Recent breakthroughs, based on computers’ ability to understand speech, process language, and see, have given rise to our technology “alter ego”–a personal guide that knows your habits and communication preferences, and helps you schedule your time, motivate your team to do their best work, or be, say, a better parent. Those same achievements have divided leading voices inside the world of technology about the potential pitfalls that may accompany this progress.

Core to the work I do on conversational AI is how we model language–not only inspired by technical advances, but also by insight from our best and brightest thinkers on the way people use words. To do so, we revisit ideas in books, such as Steven Pinker’s The Stuff of Thought, that give us closer looks at the complexity of human language, which combines logical rules with the unpredictability of human passion.

Humanity’s most important moments are often those risky interactions where emotion comes into play–like a date or a business negotiation–and people use vague, ambiguous language to take social risks. AI that understands language needs to combine the logical and unpredictable ways people interact. This likely means AI needs to recognize when people are more effective on their own–when to get out of the way, when not to help, when not to record, when not to interrupt or distract.

The advances that AI is bringing to our world have been a half-century in the making. But AI’s time is now. The world now generates vast amounts of data, and only the almost limitless computing power of the cloud can make sense of it. AI can truly help solve some of the world’s most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won’t be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans and for humanity.

Source: TIME, January 4, 2018. Cheng is a corporate vice president of Microsoft AI & Research.

Competitive Intelligence: Artificial Intelligence for the Real World, by Thomas H. Davenport and Rajeev Ronanki

In 2013, the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.

But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients. At the same time, the cancer center’s IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems. The results of these projects have been much more promising: The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers.

Despite the setback on the moon shot, MD Anderson remains committed to using cognitive technology—that is, next-generation artificial intelligence—to enhance cancer treatment, and is currently developing a variety of new projects at its center of competency for cognitive computing.

The contrast between the two approaches is relevant to anyone planning AI initiatives. Our survey of 250 executives who are familiar with their companies’ use of cognitive technology shows that three-quarters of them believe that AI will substantially transform their companies within three years.

However, our study of 152 projects in almost as many companies also reveals that highly ambitious moon shots are less likely to be successful than “low-hanging fruit” projects that enhance business processes.

This shouldn’t be surprising—such has been the case with the great majority of new technologies that companies have adopted in the past. But the hype surrounding artificial intelligence has been especially powerful, and some organizations have been seduced by it.

In this article, we’ll look at the various categories of AI being employed and provide a framework for how companies should begin to build up their cognitive capabilities in the next several years to achieve their business objectives.

Three Types of AI

It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.

Process automation.

Of the 152 projects we studied, the most common type was the automation of digital and physical tasks—typically back-office administrative and financial activities—using robotic process automation (RPA) technologies.

RPA is more advanced than earlier business-process automation tools, because the “robots” (that is, code on a server) act like a human inputting and consuming information from multiple IT systems. Tasks include:

  • transferring data from e-mail and call center systems into systems of record—for example, updating customer files with address changes or service additions;
  • replacing lost credit or ATM cards, reaching into multiple systems to update records and handle customer communications;
  • reconciling failures to charge for services across billing systems by extracting information from multiple document types; and
  • “reading” legal and contractual documents to extract provisions using natural language processing.

RPA is the least expensive and easiest to implement of the cognitive technologies we’ll discuss here, and typically brings a quick and high return on investment. (It’s also the least “smart” in the sense that these applications aren’t programmed to learn and improve, though developers are slowly adding more intelligence and learning capability.) It is particularly well suited to working across multiple back-end systems.
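
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of rule-driven, cross-system task an RPA bot performs. Everything in it (the record formats, customer IDs, and function names) is hypothetical and invented for illustration; commercial RPA platforms supply their own connectors and visual designers rather than hand-written scripts.

```python
# A minimal, hypothetical sketch of an RPA-style task: applying address
# changes received by email to a customer system of record. In a real
# deployment, the "systems" would be live applications reached through
# connectors or UI automation rather than in-memory dictionaries.

# System of record, keyed by customer ID (invented data).
crm = {
    "C-1001": {"name": "A. Jones", "address": "12 Elm St"},
    "C-1002": {"name": "B. Smith", "address": "9 Oak Ave"},
}

# Parsed change requests pulled from an email or call-center queue.
change_requests = [
    {"customer_id": "C-1001", "field": "address", "new_value": "48 Birch Rd"},
    {"customer_id": "C-9999", "field": "address", "new_value": "1 Pine Ct"},
]

def apply_request(request, system_of_record):
    """Apply one change if the rules allow it; otherwise flag for a human."""
    record = system_of_record.get(request["customer_id"])
    if record is None or request["field"] not in record:
        return ("escalate", request)          # unknown customer or field
    record[request["field"]] = request["new_value"]
    return ("applied", request)

for req in change_requests:
    status, detail = apply_request(req, crm)
    print(status, detail["customer_id"])
```

Even at this toy scale, the essential point is visible: the bot follows explicit rules and escalates anything outside them, which is part of why RPA is cheap to deploy but, as noted above, does not learn on its own.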

At NASA, cost pressures led the agency to launch four RPA pilots in accounts payable and receivable, IT spending, and human resources—all managed by a shared services center. The four projects worked well—in the HR application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA is now implementing more RPA bots, some with higher levels of intelligence. As Jim Walker, project leader for the shared services organization, notes, “So far it’s not rocket science.”

One might imagine that robotic process automation would quickly put people out of work. But across the 71 RPA projects we reviewed (47% of the total), replacing administrative employees was neither the primary objective nor a common outcome. Only a few projects led to reductions in head count, and in most cases, the tasks in question had already been shifted to outsourced workers. As technology improves, robotic automation projects are likely to lead to some job losses in the future, particularly in the offshore business-process outsourcing industry. If you can outsource a task, you can probably automate it.

Cognitive insight.

The second most common type of project in our study (38% of the total) used algorithms to detect patterns in vast volumes of data and interpret their meaning. Think of it as “analytics on steroids.” These machine-learning applications are being used to:

  • predict what a particular customer is likely to buy;
  • identify credit fraud in real time and detect insurance claims fraud;
  • analyze warranty data to identify safety or quality problems in automobiles and other manufactured products;
  • automate personalized targeting of digital ads; and
  • provide insurers with more-accurate and detailed actuarial modeling.

Cognitive insights provided by machine learning differ from those available from traditional analytics in three ways: They are usually much more data-intensive and detailed, the models typically are trained on some part of the data set, and the models get better—that is, their ability to use new data to make predictions or put things into categories improves over time.
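
The second and third of these properties, training on part of the data and improving as data accumulates, are easy to see in code. The snippet below is a generic illustration using scikit-learn’s synthetic data, not a reconstruction of any system discussed in this article.

```python
# Illustration: a model is fitted on one part of the data, evaluated on the
# held-out part, and typically scores better as it sees more training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for n in (100, 1000, len(X_train)):          # grow the training set
    model = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(model.score(X_test, y_test), 3))
```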

Versions of machine learning (deep learning, in particular, which attempts to mimic the activity in the human brain in order to recognize patterns) can perform feats such as recognizing images and speech.

Machine learning can also make available new data for better analytics. While the activity of data curation has historically been quite labor-intensive, now machine learning can identify probabilistic matches—data that is likely to be associated with the same person or company but that appears in slightly different formats—across databases.

GE has used this technology to integrate supplier data and saved $80 million in the first year by eliminating redundancies and negotiating contracts that were previously managed at the business unit level.

Similarly, a large bank used this technology to extract data on terms from supplier contracts and match it with invoice numbers, identifying tens of millions of dollars in products and services not supplied.
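
To illustrate what probabilistic matching means in practice, here is a toy sketch built on a simple string-similarity score. The supplier names and the 0.8 threshold are invented; production systems of the kind described above combine many fields and typically learn their match thresholds from labeled examples.

```python
# Toy probabilistic matching: pair records from two databases whose supplier
# names are written slightly differently.
from difflib import SequenceMatcher

db_a = ["Acme Industrial Supply", "Northfield Plastics Inc."]
db_b = ["ACME Industrial Supply Co", "Northfield Plastics", "Zenith Tooling"]

def similarity(a, b):
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for name_a in db_a:
    best = max(db_b, key=lambda name_b: similarity(name_a, name_b))
    score = similarity(name_a, best)
    if score > 0.8:                       # hypothetical match threshold
        print(f"likely match ({score:.2f}): {name_a!r} ~ {best!r}")
```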

Deloitte’s audit practice is using cognitive insight to extract terms from contracts, which enables an audit to address a much higher proportion of documents, often 100%, without human auditors’ having to painstakingly read through them.

Cognitive insight applications are typically used to improve performance on jobs only machines can do—tasks such as programmatic ad buying that involve such high-speed data crunching and automation that they’ve long been beyond human ability—so they’re not generally a threat to human jobs.

Cognitive engagement.

Projects that engage employees and customers using natural language processing chatbots, intelligent agents, and machine learning were the least common type in our study (accounting for 16% of the total). This category includes:

  • intelligent agents that offer 24/7 customer service addressing a broad and growing array of issues from password requests to technical support questions—all in the customer’s natural language;
  • internal sites for answering employee questions on topics including IT, employee benefits, and HR policy;
  • product and service recommendation systems for retailers that increase personalization, engagement, and sales—typically including rich language or images; and
  • health treatment recommendation systems that help providers create customized care plans that take into account individual patients’ health status and previous treatments.

The companies in our study tended to use cognitive engagement technologies more to interact with employees than with customers. That may change as firms become more comfortable turning customer interactions over to machines.

Vanguard, for example, is piloting an intelligent agent that helps its customer service staff answer frequently asked questions. The plan is to eventually allow customers to engage with the cognitive agent directly, rather than with the human customer-service agents.

SEBank, in Sweden, and the medical technology giant Becton, Dickinson, in the United States, are using the lifelike intelligent-agent avatar Amelia to serve as an internal employee help desk for IT support. SEBank has recently made Amelia available to customers on a limited basis in order to test its performance and customer response.

[Exhibit: The benefits of cognitive technologies]

Companies tend to take a conservative approach to customer-facing cognitive engagement technologies largely because of their immaturity. Facebook, for example, found that its Messenger chatbots couldn’t answer 70% of customer requests without human intervention. As a result, Facebook and several other firms are restricting bot-based interfaces to certain topic domains or conversation types.

Our research suggests that cognitive engagement apps are not currently threatening customer service or sales rep jobs. In most of the projects we studied, the goal was not to reduce head count but to handle growing numbers of employee and customer interactions without adding staff.

Some organizations were planning to hand over routine communications to machines, while transitioning customer-support personnel to more-complex activities such as handling customer issues that escalate, conducting extended unstructured dialogues, or reaching out to customers before they call in with problems.

As companies become more familiar with cognitive tools, they are experimenting with projects that combine elements from all three categories to reap the benefits of AI. An Italian insurer, for example, developed a “cognitive help desk” within its IT organization. The system engages with employees using deep-learning technology (part of the cognitive insights category) to search frequently asked questions and answers, previously resolved cases, and documentation to come up with solutions to employees’ problems. It uses a smart-routing capability (business process automation) to forward the most complex problems to human representatives, and it uses natural language processing to support user requests in Italian.
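
The insurer’s architecture, searching prior answers and routing the hardest cases to people, can be sketched in miniature with a TF-IDF retriever and a similarity cutoff. The FAQ entries and the threshold below are hypothetical, and the actual system described uses deep learning rather than simple TF-IDF retrieval.

```python
# Miniature cognitive help desk: retrieve the closest known answer, and
# route the ticket to a human when nothing known is similar enough.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "How do I reset my password?": "Use the self-service portal link.",
    "How do I request a new laptop?": "File a hardware request form.",
}
questions = list(faq)

vectorizer = TfidfVectorizer().fit(questions)
faq_matrix = vectorizer.transform(questions)

def answer(ticket, threshold=0.3):        # hypothetical routing threshold
    scores = cosine_similarity(vectorizer.transform([ticket]), faq_matrix)[0]
    best = scores.argmax()
    if scores[best] < threshold:
        return "routed to human representative"
    return faq[questions[best]]

print(answer("password reset not working"))
print(answer("the espresso machine is broken"))
```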

Despite their rapidly expanding experience with cognitive tools, however, companies face significant obstacles in development and implementation. On the basis of our research, we’ve developed a four-step framework for integrating AI technologies that can help companies achieve their objectives, whether the projects are moon shots or business-process enhancements.

1. Understanding the Technologies

Before embarking on an AI initiative, companies must understand which technologies perform what types of tasks, and the strengths and limitations of each. Rule-based expert systems and robotic process automation, for example, are transparent in how they do their work, but neither is capable of learning and improving.

Deep learning, on the other hand, is great at learning from large volumes of labeled data, but it’s almost impossible to understand how it creates the models it does. This “black box” issue can be problematic in highly regulated industries such as financial services, in which regulators insist on knowing why decisions are made in a certain way.

We encountered several organizations that wasted time and money pursuing the wrong technology for the job at hand. But if they’re armed with a good understanding of the different technologies, companies are better positioned to determine which might best address specific needs, which vendors to work with, and how quickly a system can be implemented. Acquiring this understanding requires ongoing research and education, usually within IT or an innovation group.

[Exhibit: The challenges of cognitive technologies]

In particular, companies will need to leverage the capabilities of key employees, such as data scientists, who have the statistical and big-data skills necessary to learn the nuts and bolts of these technologies. A main success factor is your people’s willingness to learn. Some will leap at the opportunity, while others will want to stick with tools they’re familiar with. Strive to have a high percentage of the former.

If you don’t have data science or analytics capabilities in-house, you’ll probably have to build an ecosystem of external service providers in the near term. If you expect to be implementing longer-term AI projects, you will want to recruit expert in-house talent. Either way, having the right capabilities is essential to progress.

Given the scarcity of cognitive technology talent, most organizations should establish a pool of resources—perhaps in a centralized function such as IT or strategy—and make experts available to high-priority projects throughout the organization. As needs and talent proliferate, it may make sense to dedicate groups to particular business functions or units, but even then a central coordinating function can be useful in managing projects and careers.

2. Creating a Portfolio of Projects

The next step in launching an AI program is to systematically evaluate needs and capabilities and then develop a prioritized portfolio of projects. In the companies we studied, this was usually done in workshops or through small consulting engagements. We recommend that companies conduct assessments in three broad areas.

Identifying the opportunities.

The first assessment determines which areas of the business could benefit most from cognitive applications. Typically, they are parts of the company where “knowledge”—insight derived from data analysis or a collection of texts—is at a premium but for some reason is not available.

  • Bottlenecks. In some cases, the lack of cognitive insights is caused by a bottleneck in the flow of information; knowledge exists in the organization, but it is not optimally distributed. That’s often the case in health care, for example, where knowledge tends to be siloed within practices, departments, or academic medical centers.
  • Scaling challenges. In other cases, knowledge exists, but the process for using it takes too long or is expensive to scale. Such is often the case with knowledge developed by financial advisers. That’s why many investment and wealth management firms now offer AI-supported “robo-advice” capabilities that provide clients with cost-effective guidance for routine financial issues.
    In the pharmaceutical industry, Pfizer is tackling the scaling problem by using IBM’s Watson to accelerate the laborious process of drug-discovery research in immuno-oncology, an emerging approach to cancer treatment that uses the body’s immune system to help fight cancer. Immuno-oncology drugs can take up to 12 years to bring to market. By combining a sweeping literature review with Pfizer’s own data, such as lab reports, Watson is helping researchers to surface relationships and find hidden patterns that should speed the identification of new drug targets, combination therapies for study, and patient selection strategies for this new class of drugs.
  • Inadequate firepower. Finally, a company may collect more data than its existing human or computer firepower can adequately analyze and apply. For example, a company may have massive amounts of data on consumers’ digital behavior but lack insight about what it means or how it can be strategically applied. To address this, companies are using machine learning to support tasks such as programmatic buying of personalized digital ads or, in the case of Cisco Systems and IBM, to create tens of thousands of “propensity models” for determining which customers are likely to buy which products.

Determining the use cases.

The second area of assessment evaluates the use cases in which cognitive applications would generate substantial value and contribute to business success. Start by asking key questions such as: How critical to your overall strategy is addressing the targeted problem? How difficult would it be to implement the proposed AI solution—both technically and organizationally? Would the benefits from launching the application be worth the effort? Next, prioritize the use cases according to which offer the most short- and long-term value, and which might ultimately be integrated into a broader platform or suite of cognitive capabilities to create competitive advantage.

Selecting the technology.

The third area to assess examines whether the AI tools being considered for each use case are truly up to the task. Chatbots and intelligent agents, for example, may frustrate some companies because most of them can’t yet match human problem solving beyond simple scripted cases (though they are improving rapidly). Other technologies, like robotic process automation that can streamline simple processes such as invoicing, may in fact slow down more-complex production systems. And while deep learning visual recognition systems can recognize images in photos and videos, they require lots of labeled data and may be unable to make sense of a complex visual field.

In time, cognitive technologies will transform how companies do business. Today, however, it’s wiser to take incremental steps with the currently available technology while planning for transformational change in the not-too-distant future. You may ultimately want to turn customer interactions over to bots, for example, but for now it’s probably more feasible—and sensible—to automate your internal IT help desk as a step toward the ultimate goal.

3. Launching Pilots

Because the gap between current and desired AI capabilities is not always obvious, companies should create pilot projects for cognitive applications before rolling them out across the entire enterprise.

Proof-of-concept pilots are particularly suited to initiatives that have high potential business value or allow the organization to test different technologies at the same time. Take special care to avoid “injections” of projects by senior executives who have been influenced by technology vendors. Just because executives and boards of directors may feel pressure to “do something cognitive” doesn’t mean you should bypass the rigorous piloting process. Injected projects often fail, which can significantly set back the organization’s AI program.

If your firm plans to launch several pilots, consider creating a cognitive center of excellence or similar structure to manage them. This approach helps build the needed technology skills and capabilities within the organization, while also helping to move small pilots into broader applications that will have a greater impact. Pfizer has more than 60 projects across the company that employ some form of cognitive technology; many are pilots, and some are now in production.

At Becton, Dickinson, a “global automation” function within the IT organization oversees a number of cognitive technology pilots that use intelligent digital agents and RPA (some work is done in partnership with the company’s Global Shared Services organization). The global automation group uses end-to-end process maps to guide implementation and identify automation opportunities. The group also uses graphical “heat maps” that indicate the organizational activities most amenable to AI interventions. The company has successfully implemented intelligent agents in IT support processes, but as yet is not ready to support large-scale enterprise processes, like order-to-cash. The health insurer Anthem has developed a similar centralized AI function that it calls the Cognitive Capability Office.

Business-process redesign.

As cognitive technology projects are developed, think through how workflows might be redesigned, focusing specifically on the division of labor between humans and the AI. In some cognitive projects, 80% of decisions will be made by machines and 20% will be made by humans; others will have the opposite ratio. Systematic redesign of workflows is necessary to ensure that humans and machines augment each other’s strengths and compensate for weaknesses.
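
One common way to implement such a division of labor is confidence-based routing: the machine decides the cases it is sure about and queues the rest for people. The sketch below is generic, and the 0.85 threshold is an invented placeholder that in practice would be tuned to the cost of a wrong decision.

```python
# Confidence-based division of labor: automate the easy majority of cases,
# queue the uncertain remainder for human judgment.
cases = [
    {"id": 1, "model_confidence": 0.97},
    {"id": 2, "model_confidence": 0.52},
    {"id": 3, "model_confidence": 0.88},
]

AUTO_THRESHOLD = 0.85   # hypothetical; tuned to the risk of a wrong decision

auto, human = [], []
for case in cases:
    (auto if case["model_confidence"] >= AUTO_THRESHOLD else human).append(case)

print(f"{len(auto)} decided by machine, {len(human)} routed to humans")
```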

The investment firm Vanguard, for example, has a new “Personal Advisor Services” (PAS) offering, which combines automated investment advice with guidance from human advisers. In the new system, cognitive technology is used to perform many of the traditional tasks of investment advising, including constructing a customized portfolio, rebalancing portfolios over time, tax loss harvesting, and tax-efficient investment selection.

Vanguard’s human advisers serve as “investing coaches,” tasked with answering investor questions, encouraging healthy financial behaviors, and being, in Vanguard’s words, “emotional circuit breakers” to keep investors on plan. Advisers are encouraged to learn about behavioral finance to perform these roles effectively. The PAS approach has quickly gathered more than $80 billion in assets under management, costs are lower than those for purely human-based advising, and customer satisfaction is high.

Vanguard understood the importance of work redesign when implementing PAS, but many companies simply “pave the cow path” by automating existing work processes, particularly when using RPA technology. By automating established workflows, companies can quickly implement projects and achieve ROI—but they forgo the opportunity to take full advantage of AI capabilities and substantively improve the process.

Cognitive work redesign efforts often benefit from applying design-thinking principles: understanding customer or end-user needs, involving employees whose work will be restructured, treating designs as experimental “first drafts,” considering multiple alternatives, and explicitly considering cognitive technology capabilities in the design process. Most cognitive projects are also suited to iterative, agile approaches to development.

4. Scaling Up

Many organizations have successfully launched cognitive pilots, but they haven’t had as much success rolling them out organization-wide. To achieve their goals, companies need detailed plans for scaling up, which requires collaboration between technology experts and owners of the business process being automated. Because cognitive technologies typically support individual tasks rather than entire processes, scale-up almost always requires integration with existing systems and processes. Indeed, in our survey, executives reported that such integration was the greatest challenge they faced in AI initiatives.

Companies should begin the scaling-up process by considering whether the required integration is even possible or feasible. If the application depends on special technology that is difficult to source, for example, that will limit scale-up. Make sure your business process owners discuss scaling considerations with the IT organization before or during the pilot phase: An end run around IT is unlikely to be successful, even for relatively simple technologies like RPA.

The health insurer Anthem, for example, is taking on the development of cognitive technologies as part of a major modernization of its existing systems. Rather than bolting new cognitive apps onto legacy technology, Anthem is using a holistic approach that maximizes the value being generated by the cognitive applications, reduces the overall cost of development and integration, and creates a halo effect on legacy systems. The company is also redesigning processes at the same time to, as CIO Tom Miller puts it, “use cognitive to move us to the next level.”

In scaling up, companies may face substantial change-management challenges. At one U.S. apparel retail chain, for example, the pilot project at a small subset of stores used machine learning for online product recommendations, predictions for optimal inventory and rapid replenishment models, and—most difficult of all—merchandising. Buyers, used to ordering product on the basis of their intuition, felt threatened and made comments like “If you’re going to trust this, what do you need me for?” After the pilot, the buyers went as a group to the chief merchandising officer and requested that the program be killed. The executive pointed out that the results were positive and warranted expanding the project. He assured the buyers that, freed of certain merchandising tasks, they could take on more high-value work that humans can still do better than machines, such as understanding younger customers’ desires and determining apparel manufacturers’ future plans. At the same time, he acknowledged that the merchandisers needed to be educated about a new way of working.

If scale-up is to achieve the desired results, firms must also focus on improving productivity. Many, for example, plan to grow their way into productivity—adding customers and transactions without adding staff. Companies that cite head count reduction as the primary justification for the AI investment should ideally plan to realize that goal over time through attrition or from the elimination of outsourcing.

The Future Cognitive Company

Our survey and interviews suggest that managers experienced with cognitive technology are bullish on its prospects. Although the early successes are relatively modest, we anticipate that these technologies will eventually transform work. We believe that companies that are adopting AI in moderation now—and have aggressive implementation plans for the future—will find themselves as well positioned to reap benefits as those that embraced analytics early on.

Through the application of AI, information-intensive domains such as marketing, health care, financial services, education, and professional services could become simultaneously more valuable and less expensive to society. Business drudgery in every industry and function—overseeing routine transactions, repeatedly answering the same questions, and extracting data from endless documents—could become the province of machines, freeing up human workers to be more productive and creative. Cognitive technologies are also a catalyst for making other data-intensive technologies succeed, including autonomous vehicles, the Internet of Things, and mobile and multichannel consumer technologies.

The great fear about cognitive technologies is that they will put masses of people out of work. Of course, some job loss is likely as smart machines take over certain tasks traditionally done by humans. However, we believe that most workers have little to fear at this point. Cognitive systems perform tasks, not entire jobs. The human job losses we’ve seen were primarily due to attrition of workers who were not replaced or through automation of outsourced work. Most cognitive tasks currently being performed augment human activity, perform a narrow task within a much broader job, or do work that wasn’t done by humans in the first place, such as big-data analytics.

Most managers with whom we discuss the issue of job loss are committed to an augmentation strategy—that is, integrating human and machine work, rather than replacing humans entirely. In our survey, only 22% of executives indicated that they considered reducing head count as a primary benefit of AI.

We believe that every large company should be exploring cognitive technologies. There will be some bumps in the road, and there is no room for complacency on issues of workforce displacement and the ethics of smart machines. But with the right planning and development, cognitive technology could usher in a golden age of productivity, work satisfaction, and prosperity.

A version of this article appeared in the January–February 2018 issue (pp.108–116) of Harvard Business Review.

Thomas H. Davenport is the President’s Distinguished Professor in Management and Information Technology at Babson College, a research fellow at the MIT Initiative on the Digital Economy, and a senior adviser at Deloitte Analytics. Author of over a dozen management books, his latest is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines


Rajeev Ronanki is a principal at Deloitte Consulting, where he leads the cognitive computing and health care innovation practices. Some of the companies mentioned in this article are Deloitte clients.

Source: Harvard Business Review, FROM THE JANUARY–FEBRUARY 2018 ISSUE. 

Competitive Intelligence: McKinsey’s State Of Machine Learning And AI, 2017

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute published an article summarizing the findings titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. External investment in AI has grown threefold since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly, and financial services industries. The graphic below illustrates the trends the study team found during their analysis.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that Venture Capital (VC), Private Equity (PE), and other external funding accounted for only a small percentage (up to 9%) of total AI investment. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors favor machine learning startups because code-based start-ups can scale up and add new features quickly; software-based machine learning startups are preferred over their more capital-intensive, hardware-based robotics counterparts, which cannot iterate at the same pace. As a result of these factors and more, corporate M&A is soaring in this area, with the compound annual growth rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier


  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies, and many startups have gotten their start by concentrating on these industries’ digital challenges. The MGI Digitization Index is a GDP-weighted average of Europe and the United States; see Appendix B of the study for a full list of metrics and an explanation of the methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and by asset, usage, and labor category.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors and industries leading in AI adoption today and those that intend to grow their investments the most in the next three years.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that combine senior management support for AI initiatives with investment in the infrastructure to support its scale and clear business goals achieve profit margins 3 to 15 percentage points higher. Of the more than 3,000 business leaders who were interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up after 90 seconds of searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would have reduced its revenue by $1B annually.

Competitive Intelligence: Forbes/KPMG Voice: The Great Rewrite

ARTIFICIAL INTELLIGENCE IS GOING TO CHANGE HOW WE LIVE AND HOW WE INTERFACE IN OUR OWN WORLD.

LEONARD BRODY for KPMGVoice

To see the video, visit Forbes’ KPMGVoice page.

Marketers need to start giving millennials what they want: artificial intelligence

Artificial intelligence is no longer a figment of the sci-fi writer’s imagination, nor is it something to be feared – Siri made sure of that. As a generation grows up with AI in their pockets, marketers need to stop shying away from it and embrace it, writes Mailee Creacy.

Elon Musk and Mark Zuckerberg are the latest high-profile business leaders to butt heads over the use of artificial intelligence, with the two trading barbs over the regulation of AI.

Musk has issued warnings about the potential dangers of the technology while Zuckerberg believes people shouldn’t slow down progress.

Zuckerberg is a fan

They aren’t the first to have opposing opinions on the topic, with Microsoft’s Bill Gates and Professor Stephen Hawking also advising caution, while Amazon’s CEO Jeff Bezos and IBM Watson SVP, David Kelly, have encouraged the development of the technology.

While industry leaders are touting their opinions about the future of AI, the general public also have strong opinions about the way they believe it will affect their lives.

The phrase ‘artificial intelligence’ may prompt fear in some people’s minds – perhaps preconditioned by Hollywood and sci-fi dramatisations of malevolent robots – however, most people are accepting of the use of AI in their everyday lives.

Whether it be the apps on our phone, the virtual personal assistant in our living room, or the self-driving cars we’re increasingly seeing in the news and soon on our streets, we are incrementally becoming more exposed to artificial intelligence in its varied forms.

The generation leading the AI transformation – millennials – believe there is nothing to fear from AI.

It’s only natural that sentiments towards artificial intelligence will reflect popular usage and exposure. Millennial males are typically the quickest adopters of new technology, so it follows that they’re the demographic most at ease with the concept of artificial intelligence.

Research released on consumer perceptions of AI shows that younger generations are open-minded when it comes to the use of AI. The survey showed millennial males are most likely to find artificial intelligence exciting (80%), least likely to be fearful of it (only 13%), and most likely to think that it will improve their job in the next five years (47%).

What does this mean for marketers?

For brand marketers, capturing the attention of millennials has long been the holy grail. Multiple screen and multiple channel users by nature, this generation demands more from the brands in their lives. Blanket marketing messages and one-way broadcasts don’t cut it, and advertisers reliant on the interruption-based practices of the past increasingly find themselves lacking in engagement.

This generation expects to be spoken to in a personalised way and accepts that businesses will anticipate their needs in advance. Netflix queues up TV shows they might like to binge watch. Google identifies when they should leave home in order to beat traffic. Now it seems this acceptance of technology that anticipates their needs extends to advertising.

The Consumer Perceptions of AI research showed that younger generations are open-minded when it comes to brands using artificial intelligence to inform their buying decisions. A clear majority of Australian millennials (74%) said they prefer brands to provide personalised advertising and offers.

This is great news for marketers who advertise online. It shows us that younger generations have become so accustomed to businesses using artificial intelligence to make their lives better that they also understand it may be used to present them with their ideal promotions, products and services.

They seem not only accepting, but expectant, and understand the data exchange that takes place now between brands and consumers – they provide information about themselves in return for interesting, entertaining and, most importantly, relevant content.

Mailee Creacy is Rocket Fuel’s general manager. 

Source: MAILEE CREACY, Mumbrella, August 7, 2017

A Strategist’s Guide to Artificial Intelligence by Anand Rao

Illustration by The Heads of State

Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality, and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corporation, the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it adjusts local yield numbers downward. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined, and less expensive automated claims process.

Monsanto paid nearly US$1 billion to buy Climate Corporation in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations, and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence, but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54 percent of the respondents said they were making substantial investments in AI today. But only 20 percent said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art, and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker, and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure, or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
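
A conditional-instruction approach of this kind can be only a few lines long. In the hypothetical sketch below, the keyword lists and the check-recent-turns-first rule are invented solely to show what a heuristic, as opposed to a learned model, looks like.

```python
# Hypothetical heuristic: classify the mood of a conversation by checking
# the most recent utterances first, per a hand-written rule.
POSITIVE = {"great", "thanks", "happy", "love"}
NEGATIVE = {"angry", "terrible", "frustrated", "hate"}

def detect_emotion(utterances):
    for text in reversed(utterances):       # rule: weigh recent turns first
        words = set(text.lower().split())
        if words & NEGATIVE:
            return "negative"
        if words & POSITIVE:
            return "positive"
    return "neutral"

print(detect_emotion(["The demo was great", "now i'm frustrated"]))  # negative
```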

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has programmed it to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
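
The frequency-counting strategy described here fits in a handful of lines: count which words follow a target word in a corpus and surface the most common ones. The miniature corpus below is invented; the production feature is trained on vastly more text.

```python
# Miniature next-word completion: suggest the three words that most often
# follow a target word in the corpus.
from collections import Counter

corpus = ("artificial intelligence is rising artificial selection shapes "
          "species artificial intelligence needs data artificial insemination "
          "is used in farming artificial intelligence learns").split()

def suggest(word, text=corpus, k=3):
    followers = Counter(nxt for cur, nxt in zip(text, text[1:]) if cur == word)
    return [w for w, _ in followers.most_common(k)]

print(suggest("artificial"))   # ['intelligence', 'selection', 'insemination']
```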

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths, and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation (see Exhibit A).

Though a deep learning neural network is the machine closest to a human brain, it is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.
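
For readers who want to see a multilayered network in code, the sketch below trains a small one on scikit-learn’s bundled 8x8 digit images. It is a deliberately tiny stand-in: real deep learning systems use many more layers, specialized architectures, and far more compute, but the idea of hidden layers learning intermediate representations is the same.

```python
# A small multilayer neural network learning to recognize handwritten digits.
# Each hidden layer learns intermediate representations of the layer below it.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)       # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
net.fit(X_train, y_train)
print("test accuracy:", round(net.score(X_test, y_test), 3))
```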

AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick–style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com, and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

• Assisted intelligence, now widely available, improves what people and organizations are already doing.

• Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.

• Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another, but require different types of investment, different staffing considerations, and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
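
The supervised classification behind that kind of sorting can be sketched with a bag-of-words model. The training subjects and labels below are invented, and Gmail’s actual models are far richer, but the pattern (learn from labeled examples, then route new items) is the same.

```python
# Toy email sorter: learn tab labels from example subjects, then route new mail.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

subjects = ["Lunch tomorrow?", "Your invoice is attached",
            "Friend request from Dana", "50% off everything this weekend",
            "Project status update", "New followers this week"]
tabs = ["Primary", "Primary", "Social", "Promotions", "Primary", "Social"]

sorter = make_pipeline(CountVectorizer(), MultinomialNB())
sorter.fit(subjects, tabs)
print(sorter.predict(["50% off this weekend only",
                      "Invoice attached for review"]))
# likely ['Promotions' 'Primary']
```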

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance, and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Company used assisted intelligence to improve its field service operations. This is a 70-plus-year-old family-owned general contractor, which, among other services to the oil and gas industry, provides maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts, or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20 percent, a rate that should continue to improve as the software learns to recognize more patterns.

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
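
At its core, a simulator like this evaluates an outcome model over many combinations of input assumptions. The toy sketch below enumerates a small grid of launch variables with a made-up demand function; the automaker’s model differs in scale and fidelity, not in kind.

```python
# Toy scenario simulation: score every combination of a few launch variables
# with an invented demand model, then report the best-scoring variation.
from itertools import product

prices = [24000, 27000, 30000]
marketing_spend = [1e6, 3e6]
city_types = ["dense", "sprawling"]

def projected_sales(price, spend, city):   # hypothetical outcome model
    base = 12000 if city == "dense" else 9000
    return base * (30000 / price) * (1 + 0.05 * (spend / 1e6))

scenarios = [(projected_sales(p, s, c), p, s, c)
             for p, s, c in product(prices, marketing_spend, city_types)]
best = max(scenarios)
print(f"best variation: price={best[1]}, spend={best[2]:.0f}, city={best[3]}")
```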

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee, and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found on their own, based not just on each customer’s patterns of behavior but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can switch from one premium video to another after just a few minutes, without penalty. This gives consumers more control over their time: they use it to choose videos better tailored to the way they feel at any given moment. Every time that happens, the system records the choice and adjusts its recommendation list, enabling Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs, higher profits per movie, and a more enthusiastic audience, which in turn enables more investment in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music services such as Spotify, have moved to similar models.
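
At its core, this kind of recommendation can be sketched as collaborative filtering: scoring unwatched titles by their similarity to titles a viewer already rated, using the whole audience’s ratings. The toy version below assumes a small invented ratings matrix; Netflix’s production system is far more elaborate.

```python
# Item-based collaborative filtering in miniature: recommend titles
# similar to those a viewer liked, using everyone's ratings.
import numpy as np

titles = ["Drama A", "Comedy B", "Thriller C", "Drama D"]
# Rows are users, columns are titles; 0 means unwatched.
ratings = np.array([
    [5, 0, 3, 4],
    [4, 2, 0, 5],
    [0, 5, 1, 0],
    [5, 1, 4, 4],
], dtype=float)

# Cosine similarity between title columns.
norms = np.linalg.norm(ratings, axis=0)
sim = (ratings.T @ ratings) / (np.outer(norms, norms) + 1e-9)

def recommend(user_idx, k=2):
    scores = sim @ ratings[user_idx]          # weight titles by similarity
    scores[ratings[user_idx] > 0] = -np.inf   # hide already-watched titles
    return [titles[i] for i in np.argsort(scores)[::-1][:k]]

print(recommend(2))  # suggestions for the third viewer
```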

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity, and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the United States. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data, and (as noted above) farming.
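
One common way to implement this kind of sifting is to rank documents by textual similarity to the matter at hand. The sketch below does so with TF-IDF and cosine similarity; Luminance’s actual methods are not public, so this is only a generic stand-in.

```python
# Rank past opinions by textual relevance to a new matter.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical corpus (in practice, many thousands of opinions).
opinions = [
    "breach of contract damages for late delivery of goods",
    "patent infringement claim over semiconductor design",
    "employment dispute regarding wrongful termination",
]
query = "supplier failed to deliver goods on time, seeking damages"

vec = TfidfVectorizer()
matrix = vec.fit_transform(opinions + [query])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

# Print opinions from most to least similar to the current proceeding.
for idx in scores.argsort()[::-1]:
    print(f"{scores[idx]:.2f}  {opinions[idx]}")
```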

To develop applications like these, you’ll need to marshal your own imagination to look for products, services, or processes that would not be possible at all without AI. For example, an AI system can track a wide range of product features, warranty costs, repeat purchase rates, and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Is a high number of repairs associated with a particular region, material, or line of products? Could you use this information to redesign your products, avoid recalls, or spark innovation in some way?
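
A bare-bones version of that correlation-surfacing idea: compute pairwise correlations across the data and report only those above a threshold. The columns and cutoff below are invented for illustration.

```python
# Scan product and warranty data for unusually strong correlations
# and surface only the noteworthy pairs.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "repair_count":    [2, 9, 1, 8, 3, 10, 2, 7],
    "warranty_cost":   [120, 610, 90, 540, 200, 700, 110, 480],
    "repeat_purchase": [1, 0, 1, 0, 1, 0, 1, 0],
    "region_north":    [0, 1, 0, 1, 0, 1, 0, 1],
})

corr = df.corr()
threshold = 0.8  # arbitrary cutoff for "noteworthy"

# Report strong pairwise relationships, skipping self-correlations.
for a, b in zip(*np.where(np.abs(corr.values) > threshold)):
    if a < b:
        print(f"{corr.index[a]} vs {corr.columns[b]}: {corr.iloc[a, b]:+.2f}")
```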

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience, and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions, and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone), and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.
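
The menu- and rule-driven pattern those rudimentary apps rely on is easy to illustrate: match a keyword against a fixed rule table, and fall back to a menu of options when nothing matches. Everything in the sketch below is invented.

```python
# Minimal menu- and rule-driven chatbot: keyword rules plus a fallback.
RULES = {
    "balance": "Your account balance is available under Settings > Billing.",
    "hours":   "We're open 9am-6pm, Monday through Friday.",
    "human":   "Connecting you to a support agent...",
}
MENU = "I can help with: " + ", ".join(RULES)

def reply(message: str) -> str:
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return MENU  # no rule matched: fall back to the option menu

print(reply("What are your hours on Friday?"))
print(reply("Tell me a joke"))  # falls back to the menu
```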

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.
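
Predictive maintenance of this sort is typically framed as classification over sensor summaries. The sketch below trains on synthetic data with invented features; it illustrates the general shape of such a system, not Boeing’s or Carnegie Mellon’s actual work.

```python
# Fleet-scale predictive maintenance in miniature: train on labeled
# sensor summaries, then flag assets for inspection by risk score.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical per-flight summaries: vibration, engine temp, cycles.
X = rng.normal(loc=[1.0, 600.0, 2000.0], scale=[0.3, 40.0, 800.0],
               size=(n, 3))
# Synthetic label: failures skew toward high vibration and temperature.
y = ((X[:, 0] > 1.3) & (X[:, 1] > 620)).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_flight = [[1.5, 650.0, 2600.0]]
risk = model.predict_proba(new_flight)[0, 1]
print(f"Maintenance risk score: {risk:.2f}")  # route to inspection if high
```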

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

• Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.

• Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.

• Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost, and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47 percent of U.S. jobs at risk; a 2016 Forrester Research report put the figure at just 6 percent, at least by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corporation, Oscar W. Larson, Netflix, and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Author Profile:

  • Anand Rao is a principal with PwC US based in Boston. He is an innovation leader for PwC’s data and analytics consulting services. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.
  • Also contributing to this article were PwC principal and assurance innovation leader Michael Baccala, PwC senior research fellow Alan Morrison, and writer Michael Fitzgerald.

Resources