Competitive Intelligence: Why You Shouldn’t Be Afraid of Artificial Intelligence by Lili Cheng

Alexander the friendly robot visits the Indoor Park to interact with children by telling classic fairy tales, singing and dancing at Westfield London on August 10, 2016 in London, England.

Jeff Spicer—Getty Images

Artificial intelligence is one of the hottest, least understood and most debated technological breakthroughs in modern times. In many ways, the magic of AI is that it’s not something you can see or touch. You may not even realize you are using it today. When your Nest thermostat knows how to set the right temperature at home or when your phone automatically corrects your grammar or when a Tesla car navigates a road autonomously–that’s AI at work.

For most of our lives, people have had to adapt to technology. To find a file on a computer, we input a command on a keyboard attached to one particular machine. To make a phone call, we tap an assortment of numbers on a keypad. To get a piece of information, we type a specific set of keywords into a search engine.

AI is turning that dynamic on its head by creating technologies that adapt to us rather than the other way around–new ways of interacting with computers that won’t seem like computing at all.

Computer scientists have been working on AI technologies for decades, and we’re now seeing that work bear fruit. Recent breakthroughs, based on computers’ ability to understand speech and language and to see, have given rise to our technology “alter ego”–a personal guide that knows your habits and communication preferences and helps you schedule your time, motivate your team to do its best work, or be, say, a better parent. Those same achievements have divided leading voices in the technology world over the potential pitfalls that may accompany this progress.

Core to the work I do on conversational AI is how we model language–not only inspired by technical advances, but also by insight from our best and brightest thinkers on the way people use words. To do so, we revisit ideas in books, such as Steven Pinker’s The Stuff of Thought, that give us closer looks at the complexity of human language, which combines logical rules with the unpredictability of human passion.

Humanity’s most important moments are often those risky interactions where emotion comes into play–like a date or a business negotiation–and people use vague, ambiguous language to take social risks. AI that understands language needs to combine the logical and unpredictable ways people interact. This likely means AI needs to recognize when people are more effective on their own–when to get out of the way, when not to help, when not to record, when not to interrupt or distract.

The advances that AI is bringing to our world have been a half-century in the making. But AI’s time is now. Our world generates vast amounts of data, and only the almost limitless computing power of the cloud can make sense of it. AI can truly help solve some of the world’s most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won’t be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans and for humanity.

Source: TIME, January 4, 2018. Cheng is a corporate vice president of Microsoft AI & Research.


Competitive Intelligence: Artificial Intelligence for the Real World, by Thomas H. Davenport and Rajeev Ronanki

In 2013, the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.

But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients. At the same time, the cancer center’s IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems. The results of these projects have been much more promising: The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers.

Despite the setback on the moon shot, MD Anderson remains committed to using cognitive technology—that is, next-generation artificial intelligence—to enhance cancer treatment, and is currently developing a variety of new projects at its center of competency for cognitive computing.

The contrast between the two approaches is relevant to anyone planning AI initiatives. Our survey of 250 executives who are familiar with their companies’ use of cognitive technology shows that three-quarters of them believe that AI will substantially transform their companies within three years.

However, our study of 152 projects in almost as many companies also reveals that highly ambitious moon shots are less likely to be successful than “low-hanging fruit” projects that enhance business processes.

This shouldn’t be surprising—such has been the case with the great majority of new technologies that companies have adopted in the past. But the hype surrounding artificial intelligence has been especially powerful, and some organizations have been seduced by it.

In this article, we’ll look at the various categories of AI being employed and provide a framework for how companies should begin to build up their cognitive capabilities in the next several years to achieve their business objectives.

Three Types of AI

It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.

Process automation.

Of the 152 projects we studied, the most common type was the automation of digital and physical tasks—typically back-office administrative and financial activities—using robotic process automation technologies.

RPA is more advanced than earlier business-process automation tools, because the “robots” (that is, code on a server) act like a human inputting and consuming information from multiple IT systems. Tasks include:

  • transferring data from e-mail and call center systems into systems of record—for example, updating customer files with address changes or service additions;
  • replacing lost credit or ATM cards, reaching into multiple systems to update records and handle customer communications;
  • reconciling failures to charge for services across billing systems by extracting information from multiple document types; and
  • “reading” legal and contractual documents to extract provisions using natural language processing.
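The tasks above share a common shape: software re-keys information from one system into another, the way a human clerk would. A minimal sketch of that pattern, using an invented address-change example (the system names and fields here are hypothetical, not from the article):

```python
# Minimal sketch of an RPA-style "robot": code that reads change requests
# exported from one system and applies them to a system of record, as a
# human clerk would re-key them. All systems and fields are invented.

def apply_address_changes(crm_requests, system_of_record):
    """Copy address-change requests from a CRM export into customer files."""
    applied = []
    for request in crm_requests:
        customer_id = request["customer_id"]
        record = system_of_record.get(customer_id)
        if record is None:
            continue  # unknown customer: leave it for a human to review
        record["address"] = request["new_address"]
        applied.append(customer_id)
    return applied

# Stand-ins for what would be two separate IT systems in a real deployment.
crm_requests = [
    {"customer_id": "C1", "new_address": "12 Oak St"},
    {"customer_id": "C9", "new_address": "5 Elm Rd"},  # not on file, skipped
]
system_of_record = {"C1": {"name": "Ada", "address": "3 Birch Ave"}}

print(apply_address_changes(crm_requests, system_of_record))  # ['C1']
```

Real RPA platforms add screen scraping, scheduling, and audit logging around this core loop, but the "act like a human across multiple systems" idea is the same.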

RPA is the least expensive and easiest to implement of the cognitive technologies we’ll discuss here, and typically brings a quick and high return on investment. (It’s also the least “smart” in the sense that these applications aren’t programmed to learn and improve, though developers are slowly adding more intelligence and learning capability.) It is particularly well suited to working across multiple back-end systems.

At NASA, cost pressures led the agency to launch four RPA pilots in accounts payable and receivable, IT spending, and human resources—all managed by a shared services center. The four projects worked well—in the HR application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA is now implementing more RPA bots, some with higher levels of intelligence. As Jim Walker, project leader for the shared services organization, notes, “So far it’s not rocket science.”

One might imagine that robotic process automation would quickly put people out of work. But across the 71 RPA projects we reviewed (47% of the total), replacing administrative employees was neither the primary objective nor a common outcome. Only a few projects led to reductions in head count, and in most cases, the tasks in question had already been shifted to outsourced workers. As technology improves, robotic automation projects are likely to lead to some job losses in the future, particularly in the offshore business-process outsourcing industry. If you can outsource a task, you can probably automate it.

Cognitive insight.

The second most common type of project in our study (38% of the total) used algorithms to detect patterns in vast volumes of data and interpret their meaning. Think of it as “analytics on steroids.” These machine-learning applications are being used to:

  • predict what a particular customer is likely to buy;
  • identify credit fraud in real time and detect insurance claims fraud;
  • analyze warranty data to identify safety or quality problems in automobiles and other manufactured products;
  • automate personalized targeting of digital ads; and
  • provide insurers with more-accurate and detailed actuarial modeling.

Cognitive insights provided by machine learning differ from those available from traditional analytics in three ways: They are usually much more data-intensive and detailed, the models typically are trained on some part of the data set, and the models get better—that is, their ability to use new data to make predictions or put things into categories improves over time.

Versions of machine learning (deep learning, in particular, which attempts to mimic the activity in the human brain in order to recognize patterns) can perform feats such as recognizing images and speech.

Machine learning can also make available new data for better analytics. While the activity of data curation has historically been quite labor-intensive, now machine learning can identify probabilistic matches—data that is likely to be associated with the same person or company but that appears in slightly different formats—across databases.
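To make the idea of probabilistic matching concrete, here is a toy sketch that scores whether two supplier records in slightly different formats likely refer to the same company. Production matchers are trained models at much larger scale; this version just uses a string-similarity ratio from the Python standard library, and the company names are invented:

```python
# Toy probabilistic record matching: flag pairs of supplier records that
# likely refer to the same company despite formatting differences.
from difflib import SequenceMatcher

def match_score(a, b):
    """Similarity in [0, 1] between two lightly normalized company names."""
    norm = lambda s: s.lower().replace(",", "").replace(".", "").strip()
    return SequenceMatcher(None, norm(a), norm(b)).ratio()

def probable_matches(db_a, db_b, threshold=0.8):
    """Cross-database pairs whose similarity exceeds the match threshold."""
    return [(a, b) for a in db_a for b in db_b
            if match_score(a, b) >= threshold]

pairs = probable_matches(
    ["Acme Industrial Supply, Inc.", "Globex Corp"],
    ["ACME Industrial Supply Inc", "Initech LLC"],
)
print(pairs)  # the two Acme variants match; the others do not
```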

GE has used this technology to integrate supplier data and has saved $80 million in its first year by eliminating redundancies and negotiating contracts that were previously managed at the business unit level.

Similarly, a large bank used this technology to extract data on terms from supplier contracts and match it with invoice numbers, identifying tens of millions of dollars in products and services not supplied.

Deloitte’s audit practice is using cognitive insight to extract terms from contracts, which enables an audit to address a much higher proportion of documents, often 100%, without human auditors’ having to painstakingly read through them.

Cognitive insight applications are typically used to improve performance on jobs only machines can do—tasks such as programmatic ad buying that involve such high-speed data crunching and automation that they’ve long been beyond human ability—so they’re not generally a threat to human jobs.

Cognitive engagement.

Projects that engage employees and customers using natural language processing chatbots, intelligent agents, and machine learning were the least common type in our study (accounting for 16% of the total). This category includes:

  • intelligent agents that offer 24/7 customer service addressing a broad and growing array of issues from password requests to technical support questions—all in the customer’s natural language;
  • internal sites for answering employee questions on topics including IT, employee benefits, and HR policy;
  • product and service recommendation systems for retailers that increase personalization, engagement, and sales—typically including rich language or images; and
  • health treatment recommendation systems that help providers create customized care plans that take into account individual patients’ health status and previous treatments.
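At its simplest, an engagement agent of the kind listed above matches a user's question against known intents and falls back to a human when it cannot. A minimal sketch, with invented FAQ entries (real agents use trained language models rather than keyword sets):

```python
# Minimal sketch of an internal FAQ agent: answer routine questions,
# escalate everything else to a person. FAQ content is invented.
import re

FAQ = {
    ("password", "reset"): "Visit the self-service portal to reset your password.",
    ("benefits", "enroll"): "Benefits enrollment opens each November in the HR portal.",
}

def answer(question):
    """Return a canned answer if all keywords of some intent appear."""
    words = set(re.findall(r"\w+", question.lower()))
    for keywords, reply in FAQ.items():
        if words.issuperset(keywords):
            return reply
    return "Escalating to a human agent."  # graceful fallback

print(answer("How do I reset my password?"))
print(answer("What is the meaning of life?"))
```

The fallback line is the important design choice: as the Facebook Messenger example later in this article shows, bots that cannot hand off gracefully frustrate users.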

The companies in our study tended to use cognitive engagement technologies more to interact with employees than with customers. That may change as firms become more comfortable turning customer interactions over to machines.

Vanguard, for example, is piloting an intelligent agent that helps its customer service staff answer frequently asked questions. The plan is to eventually allow customers to engage with the cognitive agent directly, rather than with the human customer-service agents.

SEBank, in Sweden, and the medical technology giant Becton, Dickinson, in the United States, are using the lifelike intelligent-agent avatar Amelia to serve as an internal employee help desk for IT support. SEBank has recently made Amelia available to customers on a limited basis in order to test its performance and customer response.

[Figure omitted: R1801H_DAVENPORT_BENEFITS.png]

Companies tend to take a conservative approach to customer-facing cognitive engagement technologies largely because of their immaturity. Facebook, for example, found that its Messenger chatbots couldn’t answer 70% of customer requests without human intervention. As a result, Facebook and several other firms are restricting bot-based interfaces to certain topic domains or conversation types.

Our research suggests that cognitive engagement apps are not currently threatening customer service or sales rep jobs. In most of the projects we studied, the goal was not to reduce head count but to handle growing numbers of employee and customer interactions without adding staff.

Some organizations were planning to hand over routine communications to machines, while transitioning customer-support personnel to more-complex activities such as handling customer issues that escalate, conducting extended unstructured dialogues, or reaching out to customers before they call in with problems.

As companies become more familiar with cognitive tools, they are experimenting with projects that combine elements from all three categories to reap the benefits of AI. An Italian insurer, for example, developed a “cognitive help desk” within its IT organization. The system engages with employees using deep-learning technology (part of the cognitive insights category) to search frequently asked questions and answers, previously resolved cases, and documentation to come up with solutions to employees’ problems. It uses a smart-routing capability (business process automation) to forward the most complex problems to human representatives, and it uses natural language processing to support user requests in Italian.


Despite their rapidly expanding experience with cognitive tools, however, companies face significant obstacles in development and implementation. On the basis of our research, we’ve developed a four-step framework for integrating AI technologies that can help companies achieve their objectives, whether the projects are moon shots or business-process enhancements.

1. Understanding the Technologies

Before embarking on an AI initiative, companies must understand which technologies perform what types of tasks, and the strengths and limitations of each. Rule-based expert systems and robotic process automation, for example, are transparent in how they do their work, but neither is capable of learning and improving.

Deep learning, on the other hand, is great at learning from large volumes of labeled data, but it’s almost impossible to understand how it creates the models it does. This “black box” issue can be problematic in highly regulated industries such as financial services, in which regulators insist on knowing why decisions are made in a certain way.

We encountered several organizations that wasted time and money pursuing the wrong technology for the job at hand. But if they’re armed with a good understanding of the different technologies, companies are better positioned to determine which might best address specific needs, which vendors to work with, and how quickly a system can be implemented. Acquiring this understanding requires ongoing research and education, usually within IT or an innovation group.

[Figure omitted: R1801H_DAVENPORT_CHALLENGES.png]

In particular, companies will need to leverage the capabilities of key employees, such as data scientists, who have the statistical and big-data skills necessary to learn the nuts and bolts of these technologies. A main success factor is your people’s willingness to learn. Some will leap at the opportunity, while others will want to stick with tools they’re familiar with. Strive to have a high percentage of the former.

If you don’t have data science or analytics capabilities in-house, you’ll probably have to build an ecosystem of external service providers in the near term. If you expect to be implementing longer-term AI projects, you will want to recruit expert in-house talent. Either way, having the right capabilities is essential to progress.

Given the scarcity of cognitive technology talent, most organizations should establish a pool of resources—perhaps in a centralized function such as IT or strategy—and make experts available to high-priority projects throughout the organization. As needs and talent proliferate, it may make sense to dedicate groups to particular business functions or units, but even then a central coordinating function can be useful in managing projects and careers.

2. Creating a Portfolio of Projects

The next step in launching an AI program is to systematically evaluate needs and capabilities and then develop a prioritized portfolio of projects. In the companies we studied, this was usually done in workshops or through small consulting engagements. We recommend that companies conduct assessments in three broad areas.

Identifying the opportunities.

The first assessment determines which areas of the business could benefit most from cognitive applications. Typically, they are parts of the company where “knowledge”—insight derived from data analysis or a collection of texts—is at a premium but for some reason is not available.

  • Bottlenecks. In some cases, the lack of cognitive insights is caused by a bottleneck in the flow of information; knowledge exists in the organization, but it is not optimally distributed. That’s often the case in health care, for example, where knowledge tends to be siloed within practices, departments, or academic medical centers.
  • Scaling challenges. In other cases, knowledge exists, but the process for using it takes too long or is expensive to scale. Such is often the case with knowledge developed by financial advisers. That’s why many investment and wealth management firms now offer AI-supported “robo-advice” capabilities that provide clients with cost-effective guidance for routine financial issues.
  • In the pharmaceutical industry, Pfizer is tackling the scaling problem by using IBM’s Watson to accelerate the laborious process of drug-discovery research in immuno-oncology, an emerging approach to cancer treatment that uses the body’s immune system to help fight cancer. Immuno-oncology drugs can take up to 12 years to bring to market. By combining a sweeping literature review with Pfizer’s own data, such as lab reports, Watson is helping researchers to surface relationships and find hidden patterns that should speed the identification of new drug targets, combination therapies for study, and patient selection strategies for this new class of drugs.
  • Inadequate firepower. Finally, a company may collect more data than its existing human or computer firepower can adequately analyze and apply. For example, a company may have massive amounts of data on consumers’ digital behavior but lack insight about what it means or how it can be strategically applied. To address this, companies are using machine learning to support tasks such as programmatic buying of personalized digital ads or, in the case of Cisco Systems and IBM, to create tens of thousands of “propensity models” for determining which customers are likely to buy which products.
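The "propensity model" idea in the last bullet can be illustrated with a toy version: estimate, from past transactions, how likely each customer segment is to buy each product. The real models at companies like Cisco and IBM are trained at far larger scale; this sketch just uses empirical purchase rates on invented data:

```python
# Toy propensity model: P(customer segment buys product), estimated from
# historical transactions. Segments, products, and data are invented.

def propensity(transactions, segment, product):
    """Fraction of a segment's transactions that involve the product."""
    in_segment = [t for t in transactions if t["segment"] == segment]
    if not in_segment:
        return 0.0  # no history for this segment
    bought = sum(1 for t in in_segment if t["product"] == product)
    return bought / len(in_segment)

transactions = [
    {"segment": "smb", "product": "router"},
    {"segment": "smb", "product": "router"},
    {"segment": "smb", "product": "firewall"},
    {"segment": "enterprise", "product": "firewall"},
]
print(propensity(transactions, "smb", "router"))  # 2 of 3 SMB purchases
```

Scaled to tens of thousands of segment-product pairs, even a simple estimator like this shows why machine support is needed: no human team can maintain that many models by hand.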

Determining the use cases.

The second area of assessment evaluates the use cases in which cognitive applications would generate substantial value and contribute to business success. Start by asking key questions such as: How critical to your overall strategy is addressing the targeted problem? How difficult would it be to implement the proposed AI solution—both technically and organizationally? Would the benefits from launching the application be worth the effort? Next, prioritize the use cases according to which offer the most short- and long-term value, and which might ultimately be integrated into a broader platform or suite of cognitive capabilities to create competitive advantage.
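The three questions above can be turned into a simple scoring exercise. The sketch below ranks candidate use cases by a weighted score over strategic value, feasibility, and expected benefit; the weights, ratings, and use-case names are illustrative assumptions, not from the article:

```python
# Illustrative use-case prioritization: rate each candidate 1-5 on value,
# feasibility, and benefit, then rank by weighted score. All inputs are
# invented examples, not data from the study.

def rank_use_cases(use_cases, weights=(0.4, 0.3, 0.3)):
    """Return use-case names ordered from highest to lowest weighted score."""
    w_value, w_feasibility, w_benefit = weights
    scored = [
        (w_value * u["value"] + w_feasibility * u["feasibility"]
         + w_benefit * u["benefit"], u["name"])
        for u in use_cases
    ]
    return [name for score, name in sorted(scored, reverse=True)]

candidates = [
    {"name": "moon-shot diagnosis", "value": 5, "feasibility": 1, "benefit": 4},
    {"name": "invoice automation",  "value": 3, "feasibility": 5, "benefit": 4},
    {"name": "chatbot help desk",   "value": 3, "feasibility": 4, "benefit": 3},
]
print(rank_use_cases(candidates))
```

Note how the weighting encodes the article's low-hanging-fruit argument: a highly feasible, modest-value project can outrank an ambitious moon shot.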

Selecting the technology.

The third area to assess examines whether the AI tools being considered for each use case are truly up to the task. Chatbots and intelligent agents, for example, may frustrate some companies because most of them can’t yet match human problem solving beyond simple scripted cases (though they are improving rapidly). Other technologies, like robotic process automation that can streamline simple processes such as invoicing, may in fact slow down more-complex production systems. And while deep learning visual recognition systems can recognize images in photos and videos, they require lots of labeled data and may be unable to make sense of a complex visual field.

In time, cognitive technologies will transform how companies do business. Today, however, it’s wiser to take incremental steps with the currently available technology while planning for transformational change in the not-too-distant future. You may ultimately want to turn customer interactions over to bots, for example, but for now it’s probably more feasible—and sensible—to automate your internal IT help desk as a step toward the ultimate goal.

3. Launching Pilots

Because the gap between current and desired AI capabilities is not always obvious, companies should create pilot projects for cognitive applications before rolling them out across the entire enterprise.

Proof-of-concept pilots are particularly suited to initiatives that have high potential business value or allow the organization to test different technologies at the same time. Take special care to avoid “injections” of projects by senior executives who have been influenced by technology vendors. Just because executives and boards of directors may feel pressure to “do something cognitive” doesn’t mean you should bypass the rigorous piloting process. Injected projects often fail, which can significantly set back the organization’s AI program.

If your firm plans to launch several pilots, consider creating a cognitive center of excellence or similar structure to manage them. This approach helps build the needed technology skills and capabilities within the organization, while also helping to move small pilots into broader applications that will have a greater impact. Pfizer has more than 60 projects across the company that employ some form of cognitive technology; many are pilots, and some are now in production.

At Becton, Dickinson, a “global automation” function within the IT organization oversees a number of cognitive technology pilots that use intelligent digital agents and RPA (some work is done in partnership with the company’s Global Shared Services organization). The global automation group uses end-to-end process maps to guide implementation and identify automation opportunities. The group also uses graphical “heat maps” that indicate the organizational activities most amenable to AI interventions. The company has successfully implemented intelligent agents in IT support processes, but as yet is not ready to support large-scale enterprise processes, like order-to-cash. The health insurer Anthem has developed a similar centralized AI function that it calls the Cognitive Capability Office.

Business-process redesign.

As cognitive technology projects are developed, think through how workflows might be redesigned, focusing specifically on the division of labor between humans and the AI. In some cognitive projects, 80% of decisions will be made by machines and 20% will be made by humans; others will have the opposite ratio. Systematic redesign of workflows is necessary to ensure that humans and machines augment each other’s strengths and compensate for weaknesses.
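One common way to implement the machine/human split described above is confidence-based triage: the machine decides when its confidence is high and routes everything else to a person. A sketch, in which the classifier is a hypothetical stand-in returning canned confidences rather than a trained model:

```python
# Sketch of human/machine division of labor via confidence-based triage.
# The classifier below is a placeholder; in practice it would be a trained
# model returning (predicted_label, confidence).

def triage(cases, classify, confidence_threshold=0.9):
    """Split cases into a machine-decided queue and a human-review queue."""
    machine, human = [], []
    for case in cases:
        label, confidence = classify(case)
        if confidence >= confidence_threshold:
            machine.append((case, label))
        else:
            human.append(case)  # low confidence: a person decides
    return machine, human

def classify(case):
    # Hypothetical model: confident on even-numbered cases only.
    return ("approve", 0.95) if case % 2 == 0 else ("approve", 0.60)

machine, human = triage([1, 2, 3, 4], classify)
print(len(machine), len(human))  # 2 2
```

Tuning the threshold is exactly the workflow-redesign decision the article describes: lower it and the ratio shifts toward the machine; raise it and more work flows to humans.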

The investment firm Vanguard, for example, has a new “Personal Advisor Services” (PAS) offering, which combines automated investment advice with guidance from human advisers. In the new system, cognitive technology is used to perform many of the traditional tasks of investment advising, including constructing a customized portfolio, rebalancing portfolios over time, tax loss harvesting, and tax-efficient investment selection.

Vanguard’s human advisers serve as “investing coaches,” tasked with answering investor questions, encouraging healthy financial behaviors, and being, in Vanguard’s words, “emotional circuit breakers” to keep investors on plan. Advisers are encouraged to learn about behavioral finance to perform these roles effectively. The PAS approach has quickly gathered more than $80 billion in assets under management, costs are lower than those for purely human-based advising, and customer satisfaction is high.

Vanguard understood the importance of work redesign when implementing PAS, but many companies simply “pave the cow path” by automating existing work processes, particularly when using RPA technology. By automating established workflows, companies can quickly implement projects and achieve ROI—but they forgo the opportunity to take full advantage of AI capabilities and substantively improve the process.

Cognitive work redesign efforts often benefit from applying design-thinking principles: understanding customer or end-user needs, involving employees whose work will be restructured, treating designs as experimental “first drafts,” considering multiple alternatives, and explicitly considering cognitive technology capabilities in the design process. Most cognitive projects are also suited to iterative, agile approaches to development.

4. Scaling Up

Many organizations have successfully launched cognitive pilots, but they haven’t had as much success rolling them out organization-wide. To achieve their goals, companies need detailed plans for scaling up, which requires collaboration between technology experts and owners of the business process being automated. Because cognitive technologies typically support individual tasks rather than entire processes, scale-up almost always requires integration with existing systems and processes. Indeed, in our survey, executives reported that such integration was the greatest challenge they faced in AI initiatives.

Companies should begin the scaling-up process by considering whether the required integration is even possible or feasible. If the application depends on special technology that is difficult to source, for example, that will limit scale-up. Make sure your business process owners discuss scaling considerations with the IT organization before or during the pilot phase: An end run around IT is unlikely to be successful, even for relatively simple technologies like RPA.

The health insurer Anthem, for example, is taking on the development of cognitive technologies as part of a major modernization of its existing systems. Rather than bolting new cognitive apps onto legacy technology, Anthem is using a holistic approach that maximizes the value being generated by the cognitive applications, reduces the overall cost of development and integration, and creates a halo effect on legacy systems. The company is also redesigning processes at the same time to, as CIO Tom Miller puts it, “use cognitive to move us to the next level.”

In scaling up, companies may face substantial change-management challenges. At one U.S. apparel retail chain, for example, the pilot project at a small subset of stores used machine learning for online product recommendations, predictions for optimal inventory and rapid replenishment models, and—most difficult of all—merchandising. Buyers, used to ordering product on the basis of their intuition, felt threatened and made comments like “If you’re going to trust this, what do you need me for?” After the pilot, the buyers went as a group to the chief merchandising officer and requested that the program be killed. The executive pointed out that the results were positive and warranted expanding the project. He assured the buyers that, freed of certain merchandising tasks, they could take on more high-value work that humans can still do better than machines, such as understanding younger customers’ desires and determining apparel manufacturers’ future plans. At the same time, he acknowledged that the merchandisers needed to be educated about a new way of working.

If scale-up is to achieve the desired results, firms must also focus on improving productivity. Many, for example, plan to grow their way into productivity—adding customers and transactions without adding staff. Companies that cite head count reduction as the primary justification for the AI investment should ideally plan to realize that goal over time through attrition or from the elimination of outsourcing.

The Future Cognitive Company

Our survey and interviews suggest that managers experienced with cognitive technology are bullish on its prospects. Although the early successes are relatively modest, we anticipate that these technologies will eventually transform work. We believe that companies that are adopting AI in moderation now—and have aggressive implementation plans for the future—will find themselves as well positioned to reap benefits as those that embraced analytics early on.

Through the application of AI, information-intensive domains such as marketing, health care, financial services, education, and professional services could become simultaneously more valuable and less expensive to society. Business drudgery in every industry and function—overseeing routine transactions, repeatedly answering the same questions, and extracting data from endless documents—could become the province of machines, freeing up human workers to be more productive and creative. Cognitive technologies are also a catalyst for making other data-intensive technologies succeed, including autonomous vehicles, the Internet of Things, and mobile and multichannel consumer technologies.

The great fear about cognitive technologies is that they will put masses of people out of work. Of course, some job loss is likely as smart machines take over certain tasks traditionally done by humans. However, we believe that most workers have little to fear at this point. Cognitive systems perform tasks, not entire jobs. The human job losses we’ve seen were primarily due to attrition of workers who were not replaced or through automation of outsourced work. Most cognitive tasks currently being performed augment human activity, perform a narrow task within a much broader job, or do work that wasn’t done by humans in the first place, such as big-data analytics.

Most managers with whom we discuss the issue of job loss are committed to an augmentation strategy—that is, integrating human and machine work, rather than replacing humans entirely. In our survey, only 22% of executives indicated that they considered reducing head count as a primary benefit of AI.

We believe that every large company should be exploring cognitive technologies. There will be some bumps in the road, and there is no room for complacency on issues of workforce displacement and the ethics of smart machines. But with the right planning and development, cognitive technology could usher in a golden age of productivity, work satisfaction, and prosperity.

A version of this article appeared in the January–February 2018 issue (pp.108–116) of Harvard Business Review.

Thomas H. Davenport is the President’s Distinguished Professor in Management and Information Technology at Babson College, a research fellow at the MIT Initiative on the Digital Economy, and a senior adviser at Deloitte Analytics. Author of over a dozen management books, his latest is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines


Rajeev Ronanki is a principal at Deloitte Consulting, where he leads the cognitive computing and health care innovation practices. Some of the companies mentioned in this article are Deloitte clients.


Technological Competitive Intelligence: McKinsey’s State Of Machine Learning And AI, 2017

These and other findings are from the McKinsey Global Institute study and discussion paper Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in) published last month. McKinsey Global Institute published an article summarizing the findings titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. External investment in AI has grown threefold since 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly, and financial services industries. The graphic below illustrates the trends the study team found during their analysis.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that venture capital (VC), private equity (PE), and other external funding account for only a small percentage (up to 9%) of total AI investment. Of all categories with publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors favor machine learning startups because code-based startups can scale up and add new features quickly. Software-based machine learning startups are preferred over their more cost-intensive, hardware-based robotics counterparts, which cannot iterate as quickly as their software peers. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier
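The ~80% CAGR figure cited in these takeaways follows from the standard compound-growth formula. A minimal sketch, with purely illustrative dollar figures that are not taken from the study:

```python
def cagr(start_value, end_value, years):
    """Compound Annual Growth Rate: (end/start)^(1/years) - 1."""
    return (end_value / start_value) ** (1 / years) - 1

# Hypothetical figures for illustration only: an investment category
# growing from $0.5B in 2013 to $2.9B in 2016 (3 compounding years).
rate = cagr(0.5, 2.9, 3)
print(f"CAGR: {rate:.0%}")  # CAGR: 80%
```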


  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups also got their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of the methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and by asset, usage, and labor category.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products, and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors leading in AI adoption today and those that intend to grow their investments the most in the next three years.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that combine senior-management support for AI initiatives with investment in supporting infrastructure and clear business goals achieve profit margins 3 to 15 percentage points higher. Of the more than 3,000 business leaders interviewed for the survey, the majority expect margins to increase by up to 5 percentage points in the next year.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up after about 90 seconds of searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would reduce its revenue by $1B annually.
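The return figures above can be sanity-checked with simple payback arithmetic. A rough sketch; every number beyond those quoted in the study is an assumption:

```python
# Back-of-the-envelope check of the Kiva figures cited above.
acquisition_cost = 775        # $M, Kiva purchase price
cycle_before = (60 + 75) / 2  # minutes, midpoint of human "click to ship"
cycle_after = 15              # minutes, with Kiva robots

cycle_reduction = 1 - cycle_after / cycle_before
print(f"cycle-time reduction: {cycle_reduction:.0%}")  # 78%

# A ~40% return on $775M implies roughly $310M in annual benefit,
# consistent with a 20% cut in operating costs on a sufficiently large
# fulfillment cost base (an assumption, not stated in the study).
implied_annual_benefit = 0.40 * acquisition_cost
print(f"implied annual benefit: ${implied_annual_benefit:.0f}M")  # $310M
```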

Marketers need to start giving millennials what they want: artificial intelligence

Artificial intelligence is no longer a figment of the sci-fi writer’s imagination, nor is it something to be feared – Siri made sure of that. As a generation grows up with AI in their pockets, marketers need to stop shying away from it and embrace it, writes Mailee Creacy.

Elon Musk and Mark Zuckerberg are the latest high-profile business leaders to butt heads over the use of artificial intelligence, with the two trading barbs over the regulation of AI.

Musk has issued warnings about the potential dangers of the technology while Zuckerberg believes people shouldn’t slow down progress.


They aren’t the first to have opposing opinions on the topic, with Microsoft’s Bill Gates and Professor Stephen Hawking also advising caution, while Amazon’s CEO Jeff Bezos and IBM Watson SVP, David Kelly, have encouraged the development of the technology.

While industry leaders are touting their opinions about the future of AI, the general public also have strong opinions about the way they believe it will affect their lives.

The phrase ‘artificial intelligence’ may prompt fear in some people’s minds – perhaps preconditioned by Hollywood and sci-fi dramatisations of malevolent robots – but most people are accepting of the use of AI in their everyday lives.

Whether it be the apps on our phone, the virtual personal assistant in our living room, or the self-driving cars we’re increasingly seeing in the news and soon on our streets, we are incrementally becoming more exposed to artificial intelligence in its varied forms.

The generation leading the AI transformation – millennials – believe there is nothing to fear from AI.

It’s only natural that sentiments towards artificial intelligence will reflect popular usage and exposure. Millennial males are typically the quickest adopters of new technology, so it follows that they’re the demographic most at ease with the concept of artificial intelligence.

Research released on consumer perceptions of AI indicates that younger generations are open-minded when it comes to the use of AI. The survey showed millennial males are most likely to find artificial intelligence exciting (80%), least likely to be fearful of it (only 13%), and most likely to think that it will improve their job in the next five years (47%).

What does this mean for marketers?

For brand marketers, capturing the attention of millennials has long been the holy grail. Multiple screen and multiple channel users by nature, this generation demands more from the brands in their lives. Blanket marketing messages and one-way broadcasts don’t cut it, and advertisers reliant on the interruption-based practices of the past increasingly find themselves lacking in engagement.

This generation expects to be spoken to in a personalised way and accepts that businesses will anticipate their needs in advance. Netflix queues up TV shows they might like to binge watch. Google identifies when they should leave home in order to beat traffic. Now it seems this acceptance of technology that anticipates their needs extends to advertising.

The Consumer Perceptions of AI research showed that younger generations are open-minded when it comes to brands using artificial intelligence to inform their buying decisions. A clear majority of Australian millennials (74%) said they prefer brands to provide personalised advertising and offers.

This is great news for marketers who advertise online. It shows us that younger generations have become so accustomed to businesses using artificial intelligence to make their lives better that they also understand it may be used to present them with their ideal promotions, products and services.

They seem not only accepting, but expectant, and understand the data exchange that takes place now between brands and consumers – they provide information about themselves in return for interesting, entertaining and, most importantly, relevant content.

Mailee Creacy is Rocket Fuel’s general manager. 

Source: MAILEE CREACY, Mumbrella, August 7, 2017 2:57

A Strategist’s Guide to Artificial Intelligence by Anand Rao


Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality, and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corporation, the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it adjusts local yield numbers downward. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined, and less expensive automated claims process.

Monsanto paid nearly US$1 billion to buy Climate Corporation in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations, and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence, but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54 percent of the respondents said they were making substantial investments in AI today. But only 20 percent said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art, and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker, and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure, or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has programmed it to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
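The frequency-count strategy described above can be sketched in a few lines. This is an illustration of the idea only, not Google’s actual system; the query corpus is made up:

```python
from collections import Counter, defaultdict

def build_model(corpus):
    """Count which word follows each word across a corpus of queries."""
    follows = defaultdict(Counter)
    for query in corpus:
        words = query.lower().split()
        for first, second in zip(words, words[1:]):
            follows[first][second] += 1
    return follows

def suggest(follows, word, k=3):
    """Return the k most frequent next words, as in the example above."""
    return [w for w, _ in follows[word.lower()].most_common(k)]

# Toy corpus standing in for real query logs (hypothetical data).
queries = [
    "artificial intelligence", "artificial intelligence jobs",
    "artificial selection", "artificial intelligence ethics",
    "artificial insemination", "artificial selection darwin",
]
model = build_model(queries)
print(suggest(model, "artificial"))  # ['intelligence', 'selection', 'insemination']
```

With real query logs the counts simply get bigger; the ranking logic stays the same.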

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths, and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation (see Exhibit A).
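The layered re-representation described above can be illustrated with a toy forward pass. This sketches the structure only; a real network learns its weights from data rather than drawing them at random, and is vastly larger:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    """One layer: linear transform followed by a ReLU nonlinearity."""
    return np.maximum(0, x @ w + b)

# A toy 3-layer network: each layer re-represents its input, the way the
# text describes edges -> facial parts -> faces. All sizes are arbitrary.
x = rng.normal(size=(1, 64))                              # a tiny flattened image patch
h1 = layer(x, rng.normal(size=(64, 32)), np.zeros(32))    # low-level features
h2 = layer(h1, rng.normal(size=(32, 16)), np.zeros(16))   # mid-level features
out = h2 @ rng.normal(size=(16, 3))                       # scores for 3 hypothetical classes
print(out.shape)  # (1, 3)
```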

Though it is the closest machine to a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick–style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, Salesforce.com, and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

• Assisted intelligence, now widely available, improves what people and organizations are already doing.

• Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.

• Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another, but require different types of investment, different staffing considerations, and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotions” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance, and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Company used assisted intelligence to improve its field service operations. This is a 70-plus-year-old family-owned general contractor, which among other services to the oil and gas industry, provides maintenance and repair for point-of-sales systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts, or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20 percent, a rate that should continue to improve as the software learns to recognize more patterns.

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.
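A drastically simplified version of such a launch simulator can enumerate variations and score each by Monte Carlo simulation. All dimensions, parameters, and numbers below are hypothetical; the real model covers far more factors and 200,000+ combinations:

```python
import itertools
import random

random.seed(42)

# Hypothetical decision dimensions for a car launch.
price_bands = ["entry", "mid", "premium"]
cities = ["dense_urban", "suburban", "rural"]
marketing = ["digital", "tv", "mixed"]

def simulate_demand(price, city, channel, trials=1000):
    """Toy stochastic demand model: average simulated units over many trials."""
    base = {"entry": 120, "mid": 90, "premium": 60}[price]
    lift = {"dense_urban": 1.2, "suburban": 1.0, "rural": 0.8}[city]
    reach = {"digital": 1.1, "tv": 1.0, "mixed": 1.15}[channel]
    return sum(random.gauss(base * lift * reach, 10) for _ in range(trials)) / trials

# Enumerate every variation and rank by expected demand.
results = {
    combo: simulate_demand(*combo)
    for combo in itertools.product(price_bands, cities, marketing)
}
best = max(results, key=results.get)
print(best)
```

The same enumerate-and-simulate pattern scales to many more dimensions; the combinatorics, not the per-scenario model, is what grows.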

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee, and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity, and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the United States. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data, and (as noted above) farming.

To develop applications like these, you’ll need to marshal your own imagination to look for products, services, or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates, and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material, or line of products? Could you use this information to redesign your products, avoid recalls, or spark innovation in some way?

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience, and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions, and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone), and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

• Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.

• Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.

• Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University's engineering school have calculated that AI could put 47 percent of jobs in the U.S. at risk; a 2016 Forrester Research report put the figure at just 6 percent by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, "AI is the new electricity," meaning that it will be found everywhere and create new jobs that weren't imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don't yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corporation, Oscar W. Larson, Netflix, and many other companies large and small, have taken AI to heart as a way to become far more capable and relevant than they otherwise would ever be.

Author Profile:

  • Anand Rao is a principal with PwC US based in Boston. He is an innovation leader for PwC’s data and analytics consulting services. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.
  • Also contributing to this article were PwC principal and assurance innovation leader Michael Baccala, PwC senior research fellow Alan Morrison, and writer Michael Fitzgerald.

Resources

AI is the new UI – Tech Vision 2017 Trend 1

Moving beyond a back-end tool for the enterprise, artificial intelligence (AI) is taking on more sophisticated roles within technology interfaces. From autonomous vehicles that use computer vision to live translations made possible by machine learning, AI is making every interface both simple and smart–and setting a high bar for how future experiences will work. AI is poised to act as the face of a company's digital brand and a key differentiator–and to become a core competency demanding C-level investment and strategy.

Source: Accenture Technology