Inteligência Competitiva Tecnológica: McKinsey’s State Of Machine Learning And AI, 2017

These and other findings are from the McKinsey Global Institute study and discussion paper, Artificial Intelligence, The Next Digital Frontier (80 pp., PDF, free, no opt-in), published last month. McKinsey Global Institute published an article summarizing the findings titled How Artificial Intelligence Can Deliver Real Value To Companies. McKinsey interviewed more than 3,000 senior executives on the use of AI technologies, their companies’ prospects for further deployment, and AI’s impact on markets, governments, and individuals. McKinsey Analytics was also utilized in the development of this study and discussion paper.

Key takeaways from the study include the following:

  • Tech giants including Baidu and Google spent between $20B and $30B on AI in 2016, with 90% of this spent on R&D and deployment, and 10% on AI acquisitions. The current rate of external investment in AI is three times what it was in 2013. McKinsey found that 20% of AI-aware firms are early adopters, concentrated in the high-tech/telecom, automotive/assembly, and financial services industries. The graphic below illustrates the trends the study team found during their analysis.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • AI is turning into a race for patents and intellectual property (IP) among the world’s leading tech companies. McKinsey found that only a small percentage (up to 9%) of AI investment comes from Venture Capital (VC), Private Equity (PE), and other external funding sources. Of all categories that have publicly available data, M&A grew the fastest between 2013 and 2016 (85%). The report cites many examples of internal development, including Amazon’s investments in robotics and speech recognition, and Salesforce’s investments in virtual agents and machine learning. BMW, Tesla, and Toyota lead auto manufacturers in their investments in robotics and machine learning for use in driverless cars. Toyota is planning to invest $1B in establishing a new research institute devoted to AI for robotics and driverless vehicles.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey estimates that total annual external investment in AI was between $8B and $12B in 2016, with machine learning attracting nearly 60% of that investment. Robotics and speech recognition are two of the most popular investment areas. Investors most favor machine learning startups because code-based startups can scale up and add new features quickly. Software-based machine learning startups are preferred over their more cost-intensive, hardware-based robotics counterparts, which often lack the scaling advantages their software counterparts enjoy. As a result of these factors and more, corporate M&A is soaring in this area, with the Compound Annual Growth Rate (CAGR) reaching approximately 80% from 2013 to 2016. The following graphic illustrates the distribution of external investments by category from the study.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier
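For readers who want to sanity-check a growth figure like this, CAGR is the constant annual rate that compounds a starting value into an ending value over a number of years. A minimal Python sketch (the input values below are illustrative, not taken from the report):

```python
def cagr(start_value: float, end_value: float, years: int) -> float:
    """Compound Annual Growth Rate: the constant yearly growth rate
    that compounds start_value into end_value over `years` years."""
    return (end_value / start_value) ** (1 / years) - 1

# Illustrative only: a value that grows roughly 5.8x over the three
# years from 2013 to 2016 implies a CAGR of about 80%.
print(f"{cagr(1.0, 5.83, 3):.1%}")  # → 80.0%
```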

  • High tech, telecom, and financial services are the leading early adopters of machine learning and AI. These industries are known for their willingness to invest in new technologies to gain competitive and internal process efficiencies. Many startups got their start by concentrating on the digital challenges of these industries. The MGI Digitization Index is a GDP-weighted average of Europe and the United States. See Appendix B of the study for a full list of metrics and an explanation of methodology. McKinsey also created an overall AI index, shown in the first column below, that compares key performance indicators (KPIs) across assets, usage, and labor where AI could make a contribution. The following is a heat map showing the relative level of AI adoption by industry and by asset, usage, and labor category.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • McKinsey predicts High Tech, Communications, and Financial Services will be the leading industries to adopt AI in the next three years. The competition for patents and intellectual property (IP) in these three industries is accelerating. Devices, products, and services available now and on the roadmaps of leading tech companies will over time reveal the level of innovative activity going on in their R&D labs today. In financial services, for example, there are clear benefits from improved accuracy and speed in AI-optimized fraud-detection systems, forecast to be a $3B market in 2020. The following graphic provides an overview of the sectors and industries leading in AI adoption today and intending to grow their investments the most in the next three years.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Healthcare, financial services, and professional services are seeing the greatest increase in their profit margins as a result of AI adoption. McKinsey found that companies that combine senior management support for AI initiatives with investment in infrastructure to support its scale and clear business goals achieve profit margins 3 to 15 percentage points higher. Of the over 3,000 business leaders who were interviewed as part of the survey, the majority expect margins to increase by up to 5 percentage points in the next year.

Source: McKinsey Global Institute, Artificial Intelligence, The Next Digital Frontier

  • Amazon has achieved impressive results from its $775 million acquisition of Kiva, a robotics company that automates picking and packing, according to the McKinsey study. “Click to ship” cycle time, which ranged from 60 to 75 minutes with humans, fell to 15 minutes with Kiva, while inventory capacity increased by 50%. Operating costs fell an estimated 20%, giving a return of close to 40% on the original investment.
  • Netflix has also achieved impressive results from the algorithm it uses to personalize recommendations to its 100 million subscribers worldwide. Netflix found that customers, on average, give up after 90 seconds of searching for a movie. By improving search results, Netflix projects that it has avoided canceled subscriptions that would reduce its revenue by $1B annually.

Marketers need to start giving millennials what they want: artificial intelligence

Artificial intelligence is no longer a figment of the sci-fi writer’s imagination, nor is it something to be feared – Siri made sure of that. As a generation grows up with AI in their pockets, marketers need to stop shying away from it and embrace it, writes Mailee Creacy.

Elon Musk and Mark Zuckerberg are the latest high-profile business leaders to butt heads over the use of artificial intelligence, with the two trading barbs over the regulation of AI.

Musk has issued warnings about the potential dangers of the technology while Zuckerberg believes people shouldn’t slow down progress.

Zuckerberg is a fan

They aren’t the first to have opposing opinions on the topic, with Microsoft’s Bill Gates and Professor Stephen Hawking also advising caution, while Amazon CEO Jeff Bezos and IBM Watson SVP David Kelly have encouraged the development of the technology.

While industry leaders are touting their opinions about the future of AI, the general public also have strong opinions about the way they believe it will affect their lives.

The phrase ‘artificial intelligence’ may prompt fear in some people’s minds – perhaps preconditioned by Hollywood and sci-fi dramatisations of malevolent robots – however, most people are accepting of the use of AI in their everyday lives.

Whether it be the apps on our phone, the virtual personal assistant in our living room, or the self-driving cars we’re increasingly seeing in the news and soon on our streets, we are incrementally becoming more exposed to artificial intelligence in its varied forms.

The generation leading the AI transformation – millennials – believe there is nothing to fear from AI.

It’s only natural that sentiments towards artificial intelligence will reflect popular usage and exposure. Millennial males are typically the quickest adopters of new technology, so it follows that they’re the demographic most at ease with the concept of artificial intelligence.

Research released on consumer perceptions of AI shows that younger generations are open-minded when it comes to the use of AI. The survey showed millennial males are the most likely to find artificial intelligence exciting (80%), the least likely to be fearful of it (only 13%), and the most likely to think that it will improve their job in the next five years (47%).

What does this mean for marketers?

For brand marketers, capturing the attention of millennials has long been the holy grail. Multiple screen and multiple channel users by nature, this generation demands more from the brands in their lives. Blanket marketing messages and one-way broadcasts don’t cut it, and advertisers reliant on the interruption-based practices of the past increasingly find themselves lacking in engagement.

This generation expects to be spoken to in a personalised way and accepts that businesses will anticipate their needs in advance. Netflix queues up TV shows they might like to binge watch. Google identifies when they should leave home in order to beat traffic. Now it seems this acceptance of technology that anticipates their needs extends to advertising.

The Consumer Perceptions of AI research showed that younger generations are open-minded when it comes to brands using artificial intelligence to inform their buying decisions. A clear majority of Australian millennials (74%) said they prefer brands to provide personalised advertising and offers.

This is great news for marketers who advertise online. It shows us that younger generations have become so accustomed to businesses using artificial intelligence to make their lives better that they also understand it may be used to present them with their ideal promotions, products and services.

They seem not only accepting, but expectant, and understand the data exchange that takes place now between brands and consumers – they provide information about themselves in return for interesting, entertaining and, most importantly, relevant content.

Mailee Creacy is Rocket Fuel’s general manager. 

Source: MAILEE CREACY, Mumbrella, August 7, 2017 2:57

A Strategist’s Guide to Artificial Intelligence by Anand Rao


Jeff Heepke knows where to plant corn on his 4,500-acre farm in Illinois because of artificial intelligence (AI). He uses a smartphone app called Climate Basic, which divides Heepke’s farmland (and, in fact, the entire continental U.S.) into plots that are 10 meters square. The app draws on local temperature and erosion records, expected precipitation, soil quality, and other agricultural data to determine how to maximize yields for each plot. If a rainy cold front is expected to pass by, Heepke knows which areas to avoid watering or irrigating that afternoon. As the U.S. Department of Agriculture noted, this use of artificial intelligence across the industry has produced the largest crops in the country’s history.

Climate Corporation, the Silicon Valley–based developer of Climate Basic, also offers a more advanced AI app that operates autonomously. If a storm hits a region, or a drought occurs, it adjusts local yield numbers downward. Farmers who have bought insurance to supplement their government coverage get a check; no questions asked, no paper filing necessary. The insurance companies and farmers both benefit from having a much less labor-intensive, more streamlined, and less expensive automated claims process.

Monsanto paid nearly US$1 billion to buy Climate Corporation in 2013, giving the company’s models added legitimacy. Since then, Monsanto has continued to upgrade the AI models, integrating data from farm equipment and sensors planted in the fields so that they improve their accuracy and insight as more data is fed into them. One result is a better understanding of climate change and its effects — for example, the northward migration of arable land for corn, or the increasing frequency of severe storms.

Applications like this are typical of the new wave of artificial intelligence in business. AI is generating new approaches to business models, operations, and the deployment of people that are likely to fundamentally change the way business operates. And if it can transform an earthbound industry like agriculture, how long will it be before your company is affected?

An Unavoidable Opportunity

Many business leaders are keenly aware of the potential value of artificial intelligence, but are not yet poised to take advantage of it. In PwC’s 2017 Digital IQ survey of senior executives worldwide, 54 percent of the respondents said they were making substantial investments in AI today. But only 20 percent said their organizations had the skills necessary to succeed with this technology (see “Winning with Digital Confidence,” by Chris Curran and Tom Puthiyamadam).

Reports on artificial intelligence tend to portray it as either a servant, making all technology more responsive, or an overlord, eliminating jobs and destroying privacy. But for business decision makers, AI is primarily an enabler of productivity. It will eliminate jobs, to be sure, but it will also fundamentally change work processes and might create jobs in the long run. The nature of decision making, collaboration, creative art, and scientific research will all be affected; so will enterprise structures. Technological systems, including potentially your products and services, as well as your office and factory equipment, will respond to people (and one another) in ways that feel as if they are coming to life.

In their book Artificial Intelligence: A Modern Approach (Pearson, 1995), Stuart Russell and Peter Norvig define AI as “the designing and building of intelligent agents that receive percepts from the environment and take actions that affect that environment.” The most critical difference between AI and general-purpose software is in the phrase “take actions.” AI enables machines to respond on their own to signals from the world at large, signals that programmers do not directly control and therefore can’t anticipate.

The fastest-growing category of AI is machine learning, or the ability of software to improve its own activity by analyzing interactions with the world at large (see “The Road to Deep Learning,” below). This technology, which has been a continual force in the history of computing since the 1940s, has grown dramatically in sophistication during the last few years.

The Road to Deep Learning

This may be the first moment in AI’s history when a majority of experts agree the technology has practical value. From its conceptual beginnings in the 1950s, led by legendary computer scientists such as Marvin Minsky and John McCarthy, its future viability has been the subject of fierce debate. As recently as 2000, the most proficient AI system was roughly comparable, in complexity, to the brain of a worm. Then, as high-bandwidth networking, cloud computing, and high-powered graphics-enabled microprocessors emerged, researchers began building multilayered neural networks — still extremely slow and limited in comparison with natural brains, but useful in practical ways.

The best-known AI triumphs — in which software systems beat expert human players in Jeopardy, chess, Go, poker, and soccer — differ from most day-to-day business applications. These games have prescribed rules and well-defined outcomes; every game ends in a win, loss, or tie. The games are also closed-loop systems: They affect only the players, not outsiders. The software can be trained through multiple failures with no serious risks. You can’t say the same of an autonomous vehicle crash, a factory failure, or a mistranslation.

There are currently two main schools of thought on how to develop the inference capabilities necessary for AI programs to navigate through the complexities of everyday life. In both, programs learn from experience — that is, the responses and reactions they get influence the way the programs act thereafter. The first approach uses conditional instructions (also known as heuristics) to accomplish this. For instance, an AI bot would interpret the emotions in a conversation by following a program that instructed it to start by checking for emotions that were evident in the recent past.
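As a toy illustration of this first approach, the function below is nothing but hand-written conditional instructions; the word lists and emotion labels are invented for the sketch, not drawn from any real system:

```python
def infer_emotion(recent_messages: list[str]) -> str:
    """Rule-based (heuristic) emotion check: scan the conversation
    starting from the most recent message, as the heuristic prescribes."""
    for message in reversed(recent_messages):
        text = message.lower()
        if any(word in text for word in ("thanks", "great", "love")):
            return "positive"
        if any(word in text for word in ("angry", "terrible", "hate")):
            return "negative"
    return "neutral"

print(infer_emotion(["The service was terrible.", "Thanks, that helped!"]))
# → positive (the most recent message decides)
```

Nothing here is learned from experience; changing the behavior means a programmer editing the rules, which is the key limitation of the heuristic approach.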

The second approach is known as machine learning. The machine is taught, using specific examples, to make inferences about the world around it. It then builds its understanding through this inference-making ability, without following specific instructions to do so. The Google search engine’s “next-word completion” feature is a good example of machine learning. Type in the word artificial, and several suggestions for the next word will appear, perhaps intelligence, selection, and insemination. No one has programmed it to seek those complements. Google chose the strategy of looking for the three words most frequently typed after artificial. With huge amounts of data available, machine learning can provide uncanny accuracy about patterns of behavior.
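The strategy described — suggest the words most frequently typed after a prefix — can be sketched with a simple frequency count. The toy corpus below is invented; the real system learns from vastly more data:

```python
from collections import Counter

# Invented mini-corpus of typed phrases.
corpus = [
    "artificial intelligence", "artificial intelligence",
    "artificial selection", "artificial insemination",
    "artificial intelligence", "artificial turf",
]

def suggest(prefix: str, phrases: list[str], k: int = 3) -> list[str]:
    """Return the k words most frequently observed after `prefix`."""
    followers = Counter()
    for phrase in phrases:
        words = phrase.split()
        for i, word in enumerate(words[:-1]):
            if word == prefix:
                followers[words[i + 1]] += 1
    return [word for word, _ in followers.most_common(k)]

print(suggest("artificial", corpus))
```

With the counts above, "intelligence" ranks first because it follows "artificial" most often; no rule ever names the word explicitly.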

The type of machine learning called deep learning has become increasingly important. A deep learning system is a multilayered neural network that learns representations of the world and stores them as a nested hierarchy of concepts many layers deep. For example, when processing thousands of images, it recognizes objects based on a hierarchy of simpler building blocks: straight lines and curved lines at the basic level, then eyes, mouths, and noses, and then faces, and then specific facial features. Besides image recognition, deep learning appears to be a promising way to approach complex challenges such as speech comprehension, human–machine conversation, language translation, and vehicle navigation (see Exhibit A).
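The layered structure described above can be sketched as a forward pass through a tiny network: each layer turns its inputs into higher-level features via weighted sums and a nonlinearity. The weights below are arbitrary placeholders; a real deep learning system learns them from enormous amounts of data:

```python
import math

def layer(inputs, weights, biases):
    # Each output neuron: a sigmoid applied to a weighted sum of inputs.
    return [
        1 / (1 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

x = [0.5, -1.2, 3.0]                       # raw features (e.g. pixel values)
hidden = layer(x, [[0.2, -0.4, 0.1], [0.7, 0.3, -0.5]], [0.0, 0.1])
score = layer(hidden, [[1.5, -2.0]], [0.2])
print(score)  # a single value between 0 and 1
```

Stacking many such layers, each feeding the next, is what makes the hierarchy "deep" — lines feed into eyes and mouths, which feed into faces.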

Though it is the machine architecture closest to a human brain, a deep learning neural network is not suitable for all problems. It requires multiple processors with enormous computing power, far beyond conventional IT architecture; it will learn only by processing enormous amounts of data; and its decision processes are not transparent.

AI applications in daily use include all smartphone digital assistants, email programs that sort entries by importance, voice recognition systems, image recognition apps such as Facebook Picture Search, digital assistants such as Amazon Echo and Google Home, and much of the emerging Industrial Internet. Some AI apps are targeted at minor frustrations — DoNotPay, an online legal bot, has reversed thousands of parking tickets — and others, such as connected car and language translation technologies, represent fundamental shifts in the way people live. A growing number are aimed at improving human behavior; for instance, GM’s 2016 Chevrolet Malibu feeds data from sensors into a backseat driver–like guidance system for teenagers at the wheel.

Despite all this activity, the market for AI is still small. Market research firm Tractica estimated 2016 revenues at just $644 million. But it expects hockey stick–style growth, reaching $15 billion by 2022 and accelerating thereafter. In late 2016, there were about 1,500 AI-related startups in the U.S. alone, and total funding in 2016 reached a record $5 billion. Google, Facebook, Microsoft, and other tech companies are snapping up AI software companies, and large, established companies are recruiting deep learning talent and, like Monsanto, buying AI companies specializing in their markets. To make the most of this technology in your enterprise, consider the three main ways that businesses can or will use AI:

• Assisted intelligence, now widely available, improves what people and organizations are already doing.

• Augmented intelligence, emerging today, enables organizations and people to do things they couldn’t otherwise do.

• Autonomous intelligence, being developed for the future, creates and deploys machines that act on their own.

Many companies will make investments in all three during the next few years, drawing from a wide variety of applications (see Exhibit 1). They complement one another, but require different types of investment, different staffing considerations, and different business models.

Assisted Intelligence

Assisted intelligence amplifies the value of existing activity. For example, Google’s Gmail sorts incoming email into “Primary,” “Social,” and “Promotion” default tabs. The algorithm, trained with data from millions of other users’ emails, makes people more efficient without changing the way they use email or altering the value it provides.
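A minimal sketch of this kind of classifier is a naive Bayes model over word counts. The training messages and tab labels below are invented stand-ins; the production model is trained on millions of messages:

```python
import math
from collections import Counter, defaultdict

# Invented training data: (message text, inbox tab) pairs.
training = [
    ("your invoice is attached", "Primary"),
    ("meeting moved to 3pm", "Primary"),
    ("friend tagged you in a photo", "Social"),
    ("new follower on your profile", "Social"),
    ("50% off this weekend only", "Promotions"),
    ("flash sale ends tonight", "Promotions"),
]

word_counts = defaultdict(Counter)   # per-tab word frequencies
label_counts = Counter()             # how many messages per tab
for text, label in training:
    label_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def classify(text: str) -> str:
    """Pick the tab with the highest naive Bayes log-probability."""
    def score(label: str) -> float:
        total = sum(word_counts[label].values())
        s = math.log(label_counts[label] / len(training))
        for w in text.split():
            # Laplace smoothing: unseen words don't zero out a tab.
            s += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        return s
    return max(label_counts, key=score)

print(classify("weekend sale 50% off"))  # → Promotions
```

The user's workflow is unchanged — mail still arrives and is read the same way — which is exactly what makes this assisted rather than augmented intelligence.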

Assisted intelligence tends to involve clearly defined, rules-based, repeatable tasks. These include automated assembly lines and other uses of physical robots; robotic process automation, in which software-based agents simulate the online activities of a human being; and back-office functions such as billing, finance, and regulatory compliance. This form of AI can be used to verify and cross-check data — for example, when paper checks are read and verified by a bank’s ATM. Assisted intelligence has already become common in some enterprise software processes. In “opportunity to order” (basic sales) and “order to cash” (receiving and processing customer orders), the software offers guidance and direction that was formerly available only from people.

The Oscar W. Larson Company used assisted intelligence to improve its field service operations. This 70-plus-year-old, family-owned general contractor, among other services to the oil and gas industry, provides maintenance and repair for point-of-sale systems and fuel dispensers at gas stations. One costly and irritating problem is “truck rerolls”: service calls that have to be rescheduled because the technician lacks the tools, parts, or expertise for a particular issue. After analyzing data on service calls, the AI software showed how to reduce truck rerolls by 20 percent, a rate that should continue to improve as the software learns to recognize more patterns.

Assisted intelligence apps often involve computer models of complex realities that allow businesses to test decisions with less risk. For example, one auto manufacturer has developed a simulation of consumer behavior, incorporating data about the types of trips people make, the ways those affect supply and demand for motor vehicles, and the variations in those patterns for different city topologies, marketing approaches, and vehicle price ranges. The model spells out more than 200,000 variations for the automaker to consider and simulates the potential success of any tested variation, thus assisting in the design of car launches. As the automaker introduces new cars and the simulator incorporates the data on outcomes from each launch, the model’s predictions will become ever more accurate.

AI-based packages of this sort are available on more and more enterprise software platforms. Success with assisted intelligence should lead to improvements in conventional business metrics such as labor productivity, revenues or margins per employee, and average time to completion for processes. Much of the cost involved is in the staff you hire, who must be skilled at marshaling and interpreting data. To evaluate where to deploy assisted intelligence, consider two questions: What products or services could you easily make more marketable if they were more automatically responsive to your customers? Which of your current processes and practices, including your decision-making practices, would be more powerful with more intelligence?

Augmented Intelligence

Augmented intelligence software lends new capability to human activity, permitting enterprises to do things they couldn’t do before. Unlike assisted intelligence, it fundamentally alters the nature of the task, and business models change accordingly.

For example, Netflix uses machine learning algorithms to do something media has never done before: suggest choices customers would probably not have found themselves, based not just on the customer’s patterns of behavior, but on those of the audience at large. A Netflix user, unlike a cable TV pay-per-view customer, can easily switch from one premium video to another without penalty, after just a few minutes. This gives consumers more control over their time. They use it to choose videos more tailored to the way they feel at any given moment. Every time that happens, the system records that observation and adjusts its recommendation list — and it enables Netflix to tailor its next round of videos to user preferences more accurately. This leads to reduced costs and higher profits per movie, and a more enthusiastic audience, which then enables more investments in personalization (and AI). Left outside this virtuous circle are conventional advertising and television networks. No wonder other video channels, such as HBO and Amazon, as well as recorded music channels such as Spotify, have moved to similar models.
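A stripped-down sketch of audience-wide recommendation: suggest titles frequently watched by other users whose histories overlap with yours. The viewing histories and titles below are invented; the real algorithms are far more sophisticated:

```python
from collections import Counter

# Invented viewing histories, one set of titles per user.
histories = [
    {"Stranger Things", "Dark", "Black Mirror"},
    {"Stranger Things", "Dark"},
    {"The Crown", "Black Mirror"},
    {"Stranger Things", "Black Mirror"},
    {"Stranger Things", "Dark"},
]

def recommend(watched: set, all_histories: list, k: int = 2) -> list:
    """Vote for unseen titles from users with overlapping taste."""
    votes = Counter()
    for history in all_histories:
        if watched & history:                # shared at least one title
            votes.update(history - watched)  # count their other titles
    return [title for title, _ in votes.most_common(k)]

print(recommend({"Stranger Things"}, histories))  # → ['Dark', 'Black Mirror']
```

The key point is that the suggestions come from the audience at large, not just the individual's own history — which is what lets the system surface choices the customer would not have found alone.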

Over time, as algorithms grow more sophisticated, the symbiotic relationship between human and AI will further change entertainment industry practices. The unit of viewing decision will probably become the scene, not the story; algorithms will link scenes to audience emotions. A consumer might ask to see only scenes where a Meryl Streep character is falling in love, or to trace a particular type of swordplay from one action movie to another. Data accumulating from these choices will further refine the ability of the entertainment industry to spark people’s emotions, satisfy their curiosity, and gain their loyalty.

Another current use of augmented intelligence is in legal research. Though most cases are searchable online, finding relevant precedents still requires many hours of sifting through past opinions. Luminance, a startup specializing in legal research, can run through thousands of cases in a very short time, providing inferences about their relevance to a current proceeding. Systems like these don’t yet replace human legal research. But they dramatically reduce the rote work conducted by associate attorneys, a job rated as the least satisfying in the United States. Similar applications are emerging for other types of data sifting, including financial audits, interpreting regulations, finding patterns in epidemiological data, and (as noted above) farming.

To develop applications like these, you’ll need to marshal your own imagination to look for products, services, or processes that would not be possible at all without AI. For example, an AI system can track a wide number of product features, warranty costs, repeat purchase rates, and more general purchasing metrics, bringing only unusual or noteworthy correlations to your attention. Are a high number of repairs associated with a particular region, material, or line of products? Could you use this information to redesign your products, avoid recalls, or spark innovation in some way?

The success of an augmented intelligence effort depends on whether it has enabled your company to do new things. To assess this capability, track your margins, innovation cycles, customer experience, and revenue growth as potential proxies. Also watch your impact on disruption: Are your new innovations doing to some part of the business ecosystem what, say, ride-hailing services are doing to conventional taxi companies?

You won’t find many off-the-shelf applications for augmented intelligence. They involve advanced forms of machine learning and natural language processing, plus specialized interfaces tailored to your company and industry. However, you can build bespoke augmented intelligence applications on cloud-based enterprise platforms, most of which allow modifications in open source code. Given the unstructured nature of your most critical decision processes, an augmented intelligence application would require voluminous historical data from your own company, along with data from the rest of your industry and related fields (such as demographics). This will help the system distinguish external factors, such as competition and economic conditions, from the impact of your own decisions.

The greatest change from augmented intelligence may be felt by senior decision makers, as the new models often give them new alternatives to consider that don’t match their past experience or gut feelings. They should be open to those alternatives, but also skeptical. AI systems are not infallible; just like any human guide, they must show consistency, explain their decisions, and counter biases, or they will lose their value.

Autonomous Intelligence

Very few autonomous intelligence systems — systems that make decisions without direct human involvement or oversight — are in widespread use today. Early examples include automated trading in the stock market (about 75 percent of Nasdaq trading is conducted autonomously) and facial recognition. In some circumstances, algorithms are better than people at identifying other people. Other early examples include robots that dispose of bombs, gather deep-sea data, maintain space stations, and perform other tasks inherently unsafe for people.

The most eagerly anticipated forms of autonomous intelligence — self-driving cars and full-fledged language translation programs — are not yet ready for general use. The closest autonomous service so far is Tencent’s messaging and social media platform WeChat, which has close to 800 million daily active users, most of them in China. The program, which was designed primarily for use on smartphones, offers relatively sophisticated voice recognition, Chinese-to-English language translation, facial recognition (including suggestions of celebrities who look like the person holding the phone), and virtual bot friends that can play guessing games. Notwithstanding their cleverness and their pioneering use of natural language processing, these are still niche applications, and still very limited by technology. Some of the most popular AI apps, for example, are small, menu- and rule-driven programs, which conduct fairly rudimentary conversations around a limited group of options.

Despite the lead time required to bring the technology further along, any business prepared to base a strategy on advanced digital technology should be thinking seriously about autonomous intelligence now. The Internet of Things will generate vast amounts of information, more than humans can reasonably interpret. In commercial aircraft, for example, so much flight data is gathered that engineers can’t process it all; thus, Boeing has announced a $7.5 million partnership with Carnegie Mellon University, along with other efforts to develop AI systems that can, for example, predict when airplanes will need maintenance. Autonomous intelligence’s greatest challenge may not be technological at all — it may be companies’ ability to build in enough transparency for people to trust these systems to act in their best interest.

First Steps

As you contemplate the introduction of artificial intelligence, articulate what mix of the three approaches works best for you.

• Are you primarily interested in upgrading your existing processes, reducing costs, and improving productivity? If so, then start with assisted intelligence, probably with a small group of services from a cloud-based provider.

• Do you seek to build your business around something new — responsive and self-driven products, or services and experiences that incorporate AI? Then pursue an augmented intelligence approach, probably with more complex AI applications resident on the cloud.

• Are you developing a genuinely new technology? Most companies will be better off primarily using someone else’s AI platforms, but if you can justify building your own, you may become one of the leaders in your market.

The transition among these forms of AI is not clean-cut; they sit on a continuum. In developing their own AI strategy, many companies begin somewhere between assisted and augmented, while expecting to move toward autonomous eventually (see Exhibit 2).

Though investments in AI may seem expensive now, the costs will decline over the next 10 years as the software becomes more commoditized. “As this technology continues to mature,” writes Daniel Eckert, a managing director in emerging technology services for PwC US, “we should see the price adhere toward a utility model and flatten out. We expect a tiered pricing model to be introduced: a free (or freemium model) for simple activities, and a premium model for discrete, business-differentiating services.”

AI is often sold on the premise that it will replace human labor at lower cost — and the effect on employment could be devastating, though no one knows for sure. Carl Benedikt Frey and Michael Osborne of Oxford University’s engineering school have calculated that AI will put 47 percent of the jobs in the U.S. at risk; a 2016 Forrester research report put the figure at just 6 percent by 2025. On the other hand, Baidu Research head (and deep learning pioneer) Andrew Ng recently said, “AI is the new electricity,” meaning that it will be found everywhere and create new jobs that weren’t imaginable before its appearance.

At the same time that AI threatens the loss of an almost unimaginable number of jobs, it is also a hungry, unsatisfied employer. The lack of capable talent — people skilled in deep learning technology and analytics — may well turn out to be the biggest obstacle for large companies. The greatest opportunities may thus be for independent businesspeople, including farmers like Jeff Heepke, who no longer need scale to compete with large companies, because AI has leveled the playing field.

It is still too early to say which types of companies will be the most successful in this area — and we don’t yet have an AI model to predict it for us. In the end, we cannot even say for sure that the companies that enter the field first will be the most successful. The dominant players will be those that, like Climate Corporation, Oscar W. Larson, Netflix, and many other companies large and small, have taken AI to heart as a way to become far more capable, in a far more relevant way, than they otherwise would ever be.

Author Profile:

  • Anand Rao is a principal with PwC US based in Boston. He is an innovation leader for PwC’s data and analytics consulting services. He holds a Ph.D. in artificial intelligence from the University of Sydney and was formerly chief research scientist at the Australian Artificial Intelligence Institute.
  • Also contributing to this article were PwC principal and assurance innovation leader Michael Baccala, PwC senior research fellow Alan Morrison, and writer Michael Fitzgerald.


AI is the new UI – Tech Vision 2017 Trend 1

Moving beyond a back-end tool for the enterprise, artificial intelligence (AI) is taking on more sophisticated roles within technology interfaces. From autonomous vehicles that use computer vision to live translations made possible by machine learning, AI is making every interface both simple and smart – and setting a high bar for how future experiences will work. AI is poised to act as the face of a company’s digital brand and a key differentiator – and to become a core competency demanding C-level investment and strategy.

Source: Accenture Technology

Competitive Intelligence – Scenarios: Artificial intelligence will save jobs, not destroy them. Here's how


Image: REUTERS/Thomas Peter

In the latter half of the 1980s a debate ensued between two camps of economists roughly grouped around the views of Edward Prescott, on the one hand, and Lawrence Summers, on the other.

Prescott argued that by and large, the booms and busts of the economic cycle were due to “technological shocks”; and Summers dismissed the notion as speculation not supported by evidence.

Over the years, the ‘technological shock’ model of economic shifts (TS) has surfaced over and over again in many forms, rising to the occasion whenever the debate over cycles rears its head.

Today, TS has penetrated the discussion on the nexus between Artificial Intelligence (AI) and the employment situation in advanced economies, with some AI enthusiasts like former Googler Sebastian Thrun offering fodder to those economists who are pessimistic about the impact of technology on the job market.

The famous Oxford Martin study by Frey and Osborne in 2013 concluded that “[a]ccording to our estimates, about 47 percent of total US employment is at risk.” It did offer the somewhat elite-reassuring view that the highest paid jobs and those requiring the highest educational attainment (and the two categories are often conjoined) might be safe for a significant time. The question, of course, is: “for how much longer?”

To be blunt, trying to predict the future more than two decades out is more science fiction than anything else. This article will thus focus on the ‘near horizon’ rather than the ‘distant future’.

A full generation after the Prescott-Summers debate, the issue of ‘productivity’ remains central. In recent comments, Summers has pointed to an interesting anomaly: despite the significant withdrawal of many blue collar jobs from the US economy (and others like it), partly because of the march of automation, productivity growth has been unimpressive, or even anemic.

There are hints in the data, both formal and anecdotal, for anyone who cares to look carefully, that while technology may be improving the quality of life on the whole, its aggregate effect on enterprise efficiency could be exaggerated. That is precisely the suspicion that led me to this subject and to this article.

The subconscious trigger, however, must have been comparing the efficient flow of Fortnum & Mason’s human-manned checkout-tills in London’s Bond Street, with the relative bumbling at Sainsbury’s robot-manned checkouts less than a mile away.

When I formed those impressions, I was not aware of a damning 2015 study at the University of Leicester, which had found that the robot-checkout contraptions could trigger everything from aggressive behaviour to increased shoplifting, and that they were actually losing the supermarkets significant amounts of money.

When they first launched, the pitch was that auto-checkouts would save shoppers 500,000 hours of unnecessary queuing time. The reality today is depressingly different.


Similar attempts by Australian mining giants to induce human redundancy using robots have been beset by glitches, leading to lost production and hasty retreats from the technology.

It is not surprising, then, that a careful look at the actual flow of R&D dollars into AI in many of the most tech-savvy companies reveals a less prominent role for ‘hard automation’, defined roughly as ‘machine-induced human redundancy’ (MIHRED), than is usually perceived to be the case.

Salesforce’s recently launched product, Einstein, focuses on helping salespeople write superior emails to targeted prospects. SAP’s HANA is integrating AI to help users better detect fraudulent transactions. Enlitic promises algorithms that help junior doctors read X-rays faster and more accurately, not ones that replace them. Affectiva wants to use its deep-learning kit to help empathy-challenged people become more emotively competent. The list goes on.

At play here are two interlocking principles: the ideas of ‘digital bicephaly (dibicephaly)’ – a literal new ‘extended hemisphere’ of the brain where accurate measurement, a behaviour largely alien to the human mind, can thrive on demand – and ‘cognitive exoskeletons (COGNEX)’ – a concept related to Flynn’s thesis of ‘new cognitive tools’ driving incremental increases in observed human IQ.

This is no Engelbart- or Kurzweil-style ‘machine-human symbiosis’ utopia, however.

At the root of the cognex-dibicephaly vision of the future is, rather, a strong emphasis on the workplace and on the ‘mid-range’ scale of capabilities in both humans and machines (pseudo-AI). There are two contexts that converge on the same point.

Firstly, most people think of automation through the lens of assembly-line logic. In reality, ‘automation’ is a softer and more pervasive feature of all modern management. Every modern company in the world has been deploying more and more supply chain management (SCM), customer relationship management (CRM) and enterprise resource planning (ERP) systems in a bid to automate more and more functions. Rather than human redundancy, improved human productivity has been the chief driver.

The problem, though, is that, as Denver-based Panorama Consulting has noted, only 12% of companies report full satisfaction with their automation programs.

Gartner has found that 75% of all ERP implementations fail. In fact, Thomas Wailgum (a top executive of the American SAP Users Group) once estimated that the chances of a successful ERP implementation may be closer to 7%.

Poor automation outcomes make experimenting with innovative business models harder and more prone to failure, and the single most cited cause is poor “personnel interfacing”. In every major lawsuit in the wake of a failed implementation, like Bridgestone’s $600 million suit against IBM, personnel-automation incongruity rises to the top of the pile. It has been widely observed that attempts to circumvent rather than enhance human input typically constitute the key failure points.

The question, it would seem then, is not how to remove humans from the chain altogether, but how to embed them more seamlessly.

A robot is seen in the automobile production line of the new Honda plant in Prachinburi, Thailand May 12, 2016. Image: REUTERS/Jorge Silva

The second point is the issue of ‘unfilled jobs’.

America alone has nearly 5.8 million of them. George Washington University’s Tara Sinclair is the lead author of a recent report that showed that a quarter of advertised jobs in the US and about a fifth in other rich countries like Canada and Germany were going unfilled.

The report correctly tied this mismatch of human skills and labour requirements to the sluggish growth in global productivity, thereby casting a more interesting complexion on the issue of human redundancy and artificial intelligence, at least in the near-term horizon.

In the same way that personnel inadequacies continue to undermine efforts to automate the enterprise, skills imbalances inflate unemployment rates and exaggerate the effect of efficiency-inducing technology. And both dynamics are strongest not in the unskilled or the superskilled segments (the tail-ends) but in the ‘middle-bulge’ of the employment curve.

It is reasonable to infer, given this background, that large-scale human redundancies caused by transhuman AI are fanciful, at least in the near-term horizon, given the actual performance of automation and the gaps in the enterprise today.

What is more likely is the proliferation of mid-tier AI systems transforming the capacity of mid-level skilled workers to better fill vacant jobs and to participate in human-critical automation of the enterprise, and in the search for novel business methods and models.

With superior virtual reality and machine-iteration systems, average food technologists can carry out a more varied range of biochemical explorations. Nurses can perform a wider range of imaging tests. Fashion design trainees can contribute more effectively to the fabric technology sourcing process.

And so on and so forth.

With improving personnel agility comes more nimble business models and an expansion of the job market.

Add these prospects to the potential productivity lift and the better syncing of job openings with personnel availability, and a whole new vision emerges of what pro-human or cis-human AI might do for the job market, one that is starkly different from the dystopian prophecies tethered to the rise of trans-human AI.

Written by Bright Simons, President, MPedigree, Published Tuesday 29 November 2016

The views expressed in this article are those of the author alone and not the World Economic Forum.

Predicting a Future Where the Future Is Routinely Predicted by Andrew W. Moore

Artificial intelligence systems will be able to give managers real-time insights about their business operations — as well as detect early warnings of problems before they occur.

Editor’s Note: This article is one of a special series of 14 commissioned essays MIT Sloan Management Review is publishing to celebrate the launch of our new Frontiers initiative. Each essay gives the author’s response to this question:

“Within the next five years, how will technology change the practice of management in a way we have not yet witnessed?”

Workers on the factory floor have suddenly gathered at a point along the production line. Some are scratching their heads. Others are gesticulating wildly. Most stand with their hands in their pockets. Something is wrong, and no one has thought to call management.

In the near future, scenes like this one will be obsolete. Thanks to advances in artificial intelligence (AI), managers will be alerted to workplace anomalies as soon as they occur. Unusual behaviors will be identified in real time by cameras and image-processing software that continuously analyze and comprehend scenes across the enterprise.

The hunch-based bets of the past already are giving way to far more reliable data-informed decisions. But AI will take this further. By analyzing new types of data, including real-time video and a range of other inputs, AI systems will be able to provide managers with insights about what is happening in their businesses at any moment in time and, even more significantly, detect early warnings of bigger problems that have yet to materialize.

As a researcher, I learned to appreciate the value of early warnings some years ago, while developing algorithms for analyzing data from hospital emergency rooms and drugstores. We discovered that we could alert public health officials to potential epidemics and even the possibility of biological warfare attacks, giving them time to take countermeasures to slow the spread of disease.

Similar analytic techniques are being deployed to detect early signs of problems in aircraft. The detailed maintenance and flight logs for the U.S. Air Force’s aging fleet of F-16 fighter jets are analyzed automatically to identify patterns of equipment failures that may affect only a handful of aircraft at present, but have the potential to become widespread. This has enabled officials to confirm and diagnose problems and take corrective action before the problems spread.

With AI, we can have machines look for millions of worrying patterns in the time it would take a human to consider just one. But that capability carries a terrible dilemma: the multiple hypotheses problem. If you sound an alarm whenever something is anomalous at a 99% confidence level, and you check millions of things an hour, then you will receive hundreds of alarms every minute.
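The arithmetic behind this dilemma is easy to sketch. The snippet below uses hypothetical numbers (the function name and check rates are illustrative, not from the article) to show how a seemingly strict 99% confidence level still drowns an operator in false alarms at scale:

```python
# Back-of-the-envelope: the multiple hypotheses problem.
# A 99% confidence level means each individual check has a 1% chance
# of raising a false alarm; multiply by millions of checks and the
# false alarms alone become unmanageable.

def false_alarms_per_minute(checks_per_hour: int, false_positive_rate: float) -> float:
    """Expected number of false alarms per minute, assuming independent checks."""
    return checks_per_hour * false_positive_rate / 60

# 2 million independent checks per hour, each at 99% confidence:
print(round(false_alarms_per_minute(2_000_000, 0.01)))  # → 333
```

Even a modest monitoring system therefore has to choose its alarm threshold with the total number of checks in mind, not just the confidence of any single check.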

Statisticians and AI researchers are working together to identify situations and conditions that tend to sound false alarms, like a truckload of potassium-rich bananas that can set off a radiation detector meant to identify nuclear materials. By reducing the risk of false alarms, it will be possible to set sensor thresholds even lower, enhancing sensitivity.
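One textbook way to set thresholds with the number of checks in mind is a multiple-testing correction. The sketch below uses the Bonferroni correction, a standard statistical device offered here as my own illustration (it is not a method named in the article), to derive a per-check threshold that bounds the chance of any false alarm across all checks:

```python
# Bonferroni correction: divide the acceptable global false-alarm
# probability by the number of checks to get the per-check threshold.

def bonferroni_alpha(global_alpha: float, n_checks: int) -> float:
    """Per-check significance level that bounds the family-wise error rate."""
    return global_alpha / n_checks

# Keep the chance of *any* false alarm across 2 million hourly checks
# below 5%; each check must then clear a far stricter bar than 1%:
per_check = bonferroni_alpha(0.05, 2_000_000)
# per_check is about 2.5e-08, versus the naive 0.01 per-check threshold
```

Corrections like this trade sensitivity for reliability, which is why hunting down false-alarm sources matters: every confounder removed lets the threshold be relaxed again without flooding operators with alerts.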

The predictive benefits of AI will stretch well beyond equipment and process analysis. For instance, researchers are having great success with algorithms that closely monitor subtle facial movements to assess the emotional and psychological states of individuals. Some of the most interesting applications now are in the mental health sphere, but imagine if the same tools could be deployed on checkout lines in stores, lines at theme parks, or security queues at airports. Are your customers happy or agitated? Executives wouldn’t need to wait weeks or even days for a survey to be completed; these systems could tell you the emotional state of your customers right now.

Other researchers are deploying AI in the classroom. When I taught, I couldn’t tell whether the lecture I was giving was any good — at least not when it would still benefit me or my students. But simple sensors like microphones and cameras can be used by AI programs to detect when active learning is taking place. Just the sounds alone — Who’s talking? Who isn’t? Is anyone laughing? — can provide a lot of clues about teaching effectiveness and when adjustments should be made.

Such a tool could also be used to gauge whether your employees are buying into what you’re sharing with them in a meeting, or if potential customers are engaged during focus groups. A “managerial Siri” might take this even further. If you asked your digital assistant, “Do the folks in my staff meeting seem to be more engaged since we had the retreat?” you might receive an answer such as, “Yes, there is an increase in eye contact between team members and a slight but significant increase in laughter.”

As a manager, I absolutely detest being surprised. And like everyone else, despite the petabytes of data at my fingertips, I too often am. But AI doesn’t get overwhelmed by the size and complexity of information the way we humans do. Thus, its promise to keep managers more in the know about what’s really happening across their enterprise is truly profound.


Andrew W. Moore is dean of the School of Computer Science at Carnegie Mellon University in Pittsburgh, Pennsylvania.

Source: MIT Sloan Management Review, Fall 2016 Issue, Frontiers, Opinion & Analysis, September 12, 2016. Reading time: 4 min.