Competitive Intelligence: Why You Shouldn’t Be Afraid of Artificial Intelligence by Lili Cheng

Alexander the friendly robot visits the Indoor Park to interact with children by telling classic fairy tales, singing and dancing at Westfield London on August 10, 2016 in London, England.

Jeff Spicer—Getty Images

 

Artificial intelligence is one of the hottest, least understood and most debated technological breakthroughs in modern times. In many ways, the magic of AI is that it’s not something you can see or touch. You may not even realize you are using it today. When your Nest thermostat knows how to set the right temperature at home or when your phone automatically corrects your grammar or when a Tesla car navigates a road autonomously–that’s AI at work.

For most of our lives, people have had to adapt to technology. To find a file on a computer, we input a command on a keyboard attached to one particular machine. To make a phone call, we tap an assortment of numbers on a keypad. To get a piece of information, we type a specific set of keywords into a search engine.

AI is turning that dynamic on its head by creating technologies that adapt to us rather than the other way around–new ways of interacting with computers that won’t seem like computing at all.

Computer scientists have been working on AI technologies for decades, and we’re now seeing that work bear fruit. Recent breakthroughs, based on computers’ ability to understand speech and language and to see, have given rise to our technology “alter ego”–a personal guide that knows your habits and communication preferences, and helps you schedule your time, motivate your team to do their best work, or be, say, a better parent. Those same achievements have divided leading voices inside the world of technology about the potential pitfalls that may accompany this progress.

Core to the work I do on conversational AI is how we model language–not only inspired by technical advances, but also by insight from our best and brightest thinkers on the way people use words. To do so, we revisit ideas in books, such as Steven Pinker’s The Stuff of Thought, that give us closer looks at the complexity of human language, which combines logical rules with the unpredictability of human passion.

Humanity’s most important moments are often those risky interactions where emotion comes into play–like a date or a business negotiation–and people use vague, ambiguous language to take social risks. AI that understands language needs to combine the logical and unpredictable ways people interact. This likely means AI needs to recognize when people are more effective on their own–when to get out of the way, when not to help, when not to record, when not to interrupt or distract.

The advances that AI is bringing to our world have been a half-century in the making. But AI’s time is now. Our world generates vast amounts of data, and only the almost limitless computing power of the cloud can make sense of it. AI can truly help solve some of the world’s most vexing problems, from improving day-to-day communication to energy, climate, health care, transportation and more. The real magic of AI, in the end, won’t be magic at all. It will be technology that adapts to people. This will be profoundly transformational for humans and for humanity.

Source: TIME, January 4, 2018. Cheng is a corporate vice president of Microsoft AI & Research.


Competitive Intelligence: Artificial Intelligence for the Real World, by Thomas H. Davenport and Rajeev Ronanki

In 2013, the MD Anderson Cancer Center launched a “moon shot” project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.

But in 2017, the project was put on hold after costs topped $62 million—and the system had yet to be used on patients. At the same time, the cancer center’s IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems. The results of these projects have been much more promising: The new systems have contributed to increased patient satisfaction, improved financial performance, and a decline in time spent on tedious data entry by the hospital’s care managers.

Despite the setback on the moon shot, MD Anderson remains committed to using cognitive technology—that is, next-generation artificial intelligence—to enhance cancer treatment, and is currently developing a variety of new projects at its center of competency for cognitive computing.

The contrast between the two approaches is relevant to anyone planning AI initiatives. Our survey of 250 executives who are familiar with their companies’ use of cognitive technology shows that three-quarters of them believe that AI will substantially transform their companies within three years.

However, our study of 152 projects in almost as many companies also reveals that highly ambitious moon shots are less likely to be successful than “low-hanging fruit” projects that enhance business processes.

This shouldn’t be surprising—such has been the case with the great majority of new technologies that companies have adopted in the past. But the hype surrounding artificial intelligence has been especially powerful, and some organizations have been seduced by it.

In this article, we’ll look at the various categories of AI being employed and provide a framework for how companies should begin to build up their cognitive capabilities in the next several years to achieve their business objectives.

Three Types of AI

It is useful for companies to look at AI through the lens of business capabilities rather than technologies. Broadly speaking, AI can support three important business needs: automating business processes, gaining insight through data analysis, and engaging with customers and employees.

Process automation.

Of the 152 projects we studied, the most common type was the automation of digital and physical tasks—typically back-office administrative and financial activities—using robotic process automation technologies.

RPA is more advanced than earlier business-process automation tools, because the “robots” (that is, code on a server) act like a human inputting and consuming information from multiple IT systems. Tasks include:

  • transferring data from e-mail and call center systems into systems of record—for example, updating customer files with address changes or service additions;
  • replacing lost credit or ATM cards, reaching into multiple systems to update records and handle customer communications;
  • reconciling failures to charge for services across billing systems by extracting information from multiple document types; and
  • “reading” legal and contractual documents to extract provisions using natural language processing.
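
To make the “code on a server” idea concrete, here is a minimal Python sketch of the first task in the list above: pulling an address change out of an e-mail and writing it into a customer record. The inbox, customer IDs, and field names are invented for illustration; a production RPA tool would work through the systems’ user interfaces or APIs rather than in-memory dictionaries.

    import re

    # Hypothetical stand-ins for the two IT systems the "robot" bridges:
    # an e-mail inbox and a customer system of record.
    inbox = [
        {"customer_id": "C-1042",
         "body": "Please update my address to 221B Baker Street, London NW1 6XE."},
    ]
    customer_records = {"C-1042": {"address": "10 Downing Street, London"}}

    ADDRESS_PATTERN = re.compile(r"update my address to (?P<address>.+?)\.\s*$")

    def process_inbox(messages, records):
        """Mimic a clerk: read each e-mail, extract the new address, update the record."""
        for message in messages:
            match = ADDRESS_PATTERN.search(message["body"])
            if match:
                records[message["customer_id"]]["address"] = match.group("address")
        return records

    print(process_inbox(inbox, customer_records))
    # {'C-1042': {'address': '221B Baker Street, London NW1 6XE'}}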

RPA is the least expensive and easiest to implement of the cognitive technologies we’ll discuss here, and typically brings a quick and high return on investment. (It’s also the least “smart” in the sense that these applications aren’t programmed to learn and improve, though developers are slowly adding more intelligence and learning capability.) It is particularly well suited to working across multiple back-end systems.

At NASA, cost pressures led the agency to launch four RPA pilots in accounts payable and receivable, IT spending, and human resources—all managed by a shared services center. The four projects worked well—in the HR application, for example, 86% of transactions were completed without human intervention—and are being rolled out across the organization. NASA is now implementing more RPA bots, some with higher levels of intelligence. As Jim Walker, project leader for the shared services organization, notes, “So far it’s not rocket science.”

One might imagine that robotic process automation would quickly put people out of work. But across the 71 RPA projects we reviewed (47% of the total), replacing administrative employees was neither the primary objective nor a common outcome. Only a few projects led to reductions in head count, and in most cases, the tasks in question had already been shifted to outsourced workers. As technology improves, robotic automation projects are likely to lead to some job losses in the future, particularly in the offshore business-process outsourcing industry. If you can outsource a task, you can probably automate it.

Cognitive insight.

The second most common type of project in our study (38% of the total) used algorithms to detect patterns in vast volumes of data and interpret their meaning. Think of it as “analytics on steroids.” These machine-learning applications are being used to:

  • predict what a particular customer is likely to buy;
  • identify credit fraud in real time and detect insurance claims fraud;
  • analyze warranty data to identify safety or quality problems in automobiles and other manufactured products;
  • automate personalized targeting of digital ads; and
  • provide insurers with more-accurate and detailed actuarial modeling.

Cognitive insights provided by machine learning differ from those available from traditional analytics in three ways: They are usually much more data-intensive and detailed, the models typically are trained on some part of the data set, and the models get better—that is, their ability to use new data to make predictions or put things into categories improves over time.
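
A small sketch of the second and third points, using scikit-learn on entirely synthetic data: the model is fit on only part of the data, scored against a held-out set, and then updated as new batches arrive. Everything here (the features, batch sizes, and classifier choice) is invented for illustration rather than drawn from any project in the study.

    import numpy as np
    from sklearn.linear_model import SGDClassifier
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    def make_batch(n):
        """Synthetic 'transactions': two features; the label depends on their sum plus noise."""
        X = rng.normal(size=(n, 2))
        y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)
        return X, y

    X_test, y_test = make_batch(2000)        # held-out data the model never trains on
    model = SGDClassifier(random_state=0)

    # Train on part of the data, then keep updating as new batches arrive.
    for batch in range(5):
        X_train, y_train = make_batch(200)
        model.partial_fit(X_train, y_train, classes=[0, 1])
        accuracy = accuracy_score(y_test, model.predict(X_test))
        print(f"after batch {batch + 1}: held-out accuracy = {accuracy:.3f}")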

Versions of machine learning (deep learning, in particular, which attempts to mimic the activity in the human brain in order to recognize patterns) can perform feats such as recognizing images and speech.

Machine learning can also make available new data for better analytics. While the activity of data curation has historically been quite labor-intensive, now machine learning can identify probabilistic matches—data that is likely to be associated with the same person or company but that appears in slightly different formats—across databases.
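
A minimal sketch of probabilistic matching, using only Python’s standard library: score how similar two supplier names are and flag pairs that are probably the same company despite formatting differences. The supplier names and the 0.85 threshold are invented; real systems use richer features (addresses, tax IDs) and learned match models.

    from difflib import SequenceMatcher

    # Toy supplier records from two databases; the names are invented.
    erp_suppliers = ["Acme Industrial Supply Inc.", "Globex Corporation"]
    procurement_suppliers = ["ACME Industrial Supply", "Initech LLC"]

    def similarity(a, b):
        """Return a 0-1 score for how alike two strings are."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    THRESHOLD = 0.85   # invented cut-off for a "probable match"
    for a in erp_suppliers:
        for b in procurement_suppliers:
            score = similarity(a, b)
            if score >= THRESHOLD:
                print(f"probable match ({score:.2f}): {a!r} ~ {b!r}")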

GE has used this technology to integrate supplier data and has saved $80 million in its first year by eliminating redundancies and negotiating contracts that were previously managed at the business unit level.

Similarly, a large bank used this technology to extract data on terms from supplier contracts and match it with invoice numbers, identifying tens of millions of dollars in products and services not supplied.

Deloitte’s audit practice is using cognitive insight to extract terms from contracts, which enables an audit to address a much higher proportion of documents, often 100%, without human auditors’ having to painstakingly read through them.

Cognitive insight applications are typically used to improve performance on jobs only machines can do—tasks such as programmatic ad buying that involve such high-speed data crunching and automation that they’ve long been beyond human ability—so they’re not generally a threat to human jobs.

Cognitive engagement.

Projects that engage employees and customers using natural language processing chatbots, intelligent agents, and machine learning were the least common type in our study (accounting for 16% of the total). This category includes:

  • intelligent agents that offer 24/7 customer service addressing a broad and growing array of issues from password requests to technical support questions—all in the customer’s natural language;
  • internal sites for answering employee questions on topics including IT, employee benefits, and HR policy;
  • product and service recommendation systems for retailers that increase personalization, engagement, and sales—typically including rich language or images; and
  • health treatment recommendation systems that help providers create customized care plans that take into account individual patients’ health status and previous treatments.

The companies in our study tended to use cognitive engagement technologies more to interact with employees than with customers. That may change as firms become more comfortable turning customer interactions over to machines.

Vanguard, for example, is piloting an intelligent agent that helps its customer service staff answer frequently asked questions. The plan is to eventually allow customers to engage with the cognitive agent directly, rather than with the human customer-service agents.

SEBank, in Sweden, and the medical technology giant Becton, Dickinson, in the United States, are using the lifelike intelligent-agent avatar Amelia to serve as an internal employee help desk for IT support. SEBank has recently made Amelia available to customers on a limited basis in order to test its performance and customer response.


Companies tend to take a conservative approach to customer-facing cognitive engagement technologies largely because of their immaturity. Facebook, for example, found that its Messenger chatbots couldn’t answer 70% of customer requests without human intervention. As a result, Facebook and several other firms are restricting bot-based interfaces to certain topic domains or conversation types.

Our research suggests that cognitive engagement apps are not currently threatening customer service or sales rep jobs. In most of the projects we studied, the goal was not to reduce head count but to handle growing numbers of employee and customer interactions without adding staff.

Some organizations were planning to hand over routine communications to machines, while transitioning customer-support personnel to more-complex activities such as handling customer issues that escalate, conducting extended unstructured dialogues, or reaching out to customers before they call in with problems.

As companies become more familiar with cognitive tools, they are experimenting with projects that combine elements from all three categories to reap the benefits of AI. An Italian insurer, for example, developed a “cognitive help desk” within its IT organization. The system engages with employees using deep-learning technology (part of the cognitive insights category) to search frequently asked questions and answers, previously resolved cases, and documentation to come up with solutions to employees’ problems. It uses a smart-routing capability (business process automation) to forward the most complex problems to human representatives, and it uses natural language processing to support user requests in Italian.

Despite their rapidly expanding experience with cognitive tools, however, companies face significant obstacles in development and implementation. On the basis of our research, we’ve developed a four-step framework for integrating AI technologies that can help companies achieve their objectives, whether the projects are moon shots or business-process enhancements.

1. Understanding the Technologies

Before embarking on an AI initiative, companies must understand which technologies perform what types of tasks, and the strengths and limitations of each. Rule-based expert systems and robotic process automation, for example, are transparent in how they do their work, but neither is capable of learning and improving.

Deep learning, on the other hand, is great at learning from large volumes of labeled data, but it’s almost impossible to understand how it creates the models it does. This “black box” issue can be problematic in highly regulated industries such as financial services, in which regulators insist on knowing why decisions are made in a certain way.

We encountered several organizations that wasted time and money pursuing the wrong technology for the job at hand. But if they’re armed with a good understanding of the different technologies, companies are better positioned to determine which might best address specific needs, which vendors to work with, and how quickly a system can be implemented. Acquiring this understanding requires ongoing research and education, usually within IT or an innovation group.


In particular, companies will need to leverage the capabilities of key employees, such as data scientists, who have the statistical and big-data skills necessary to learn the nuts and bolts of these technologies. A main success factor is your people’s willingness to learn. Some will leap at the opportunity, while others will want to stick with tools they’re familiar with. Strive to have a high percentage of the former.

If you don’t have data science or analytics capabilities in-house, you’ll probably have to build an ecosystem of external service providers in the near term. If you expect to be implementing longer-term AI projects, you will want to recruit expert in-house talent. Either way, having the right capabilities is essential to progress.

Given the scarcity of cognitive technology talent, most organizations should establish a pool of resources—perhaps in a centralized function such as IT or strategy—and make experts available to high-priority projects throughout the organization. As needs and talent proliferate, it may make sense to dedicate groups to particular business functions or units, but even then a central coordinating function can be useful in managing projects and careers.

2. Creating a Portfolio of Projects

The next step in launching an AI program is to systematically evaluate needs and capabilities and then develop a prioritized portfolio of projects. In the companies we studied, this was usually done in workshops or through small consulting engagements. We recommend that companies conduct assessments in three broad areas.

Identifying the opportunities.

The first assessment determines which areas of the business could benefit most from cognitive applications. Typically, they are parts of the company where “knowledge”—insight derived from data analysis or a collection of texts—is at a premium but for some reason is not available.

  • Bottlenecks. In some cases, the lack of cognitive insights is caused by a bottleneck in the flow of information; knowledge exists in the organization, but it is not optimally distributed. That’s often the case in health care, for example, where knowledge tends to be siloed within practices, departments, or academic medical centers.
  • Scaling challenges. In other cases, knowledge exists, but the process for using it takes too long or is expensive to scale. Such is often the case with knowledge developed by financial advisers. That’s why many investment and wealth management firms now offer AI-supported “robo-advice” capabilities that provide clients with cost-effective guidance for routine financial issues.
  • In the pharmaceutical industry, Pfizer is tackling the scaling problem by using IBM’s Watson to accelerate the laborious process of drug-discovery research in immuno-oncology, an emerging approach to cancer treatment that uses the body’s immune system to help fight cancer. Immuno-oncology drugs can take up to 12 years to bring to market. By combining a sweeping literature review with Pfizer’s own data, such as lab reports, Watson is helping researchers to surface relationships and find hidden patterns that should speed the identification of new drug targets, combination therapies for study, and patient selection strategies for this new class of drugs.
  • Inadequate firepower. Finally, a company may collect more data than its existing human or computer firepower can adequately analyze and apply. For example, a company may have massive amounts of data on consumers’ digital behavior but lack insight about what it means or how it can be strategically applied. To address this, companies are using machine learning to support tasks such as programmatic buying of personalized digital ads or, in the case of Cisco Systems and IBM, to create tens of thousands of “propensity models” for determining which customers are likely to buy which products.
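
A propensity model of the kind mentioned above can be as simple as one classifier per product that maps a customer’s behavioral features to a probability of purchase. The sketch below, with invented products and synthetic data, illustrates the shape of the approach rather than any specific company’s system.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Toy behavioral features per customer, e.g. page views, past purchases, recency.
    X = rng.normal(size=(500, 3))

    # One propensity model per product; the labels are synthetic stand-ins for purchase history.
    products = ["router", "firewall", "switch"]
    models = {}
    for product in products:
        y = (X @ rng.normal(size=3) + rng.normal(scale=0.5, size=500) > 0).astype(int)
        models[product] = LogisticRegression().fit(X, y)

    # Score a new customer: the predicted probability of buying each product.
    new_customer = rng.normal(size=(1, 3))
    for product, model in models.items():
        propensity = model.predict_proba(new_customer)[0, 1]
        print(f"{product}: propensity to buy = {propensity:.2f}")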

Determining the use cases.

The second area of assessment evaluates the use cases in which cognitive applications would generate substantial value and contribute to business success. Start by asking key questions such as: How critical to your overall strategy is addressing the targeted problem? How difficult would it be to implement the proposed AI solution—both technically and organizationally? Would the benefits from launching the application be worth the effort? Next, prioritize the use cases according to which offer the most short- and long-term value, and which might ultimately be integrated into a broader platform or suite of cognitive capabilities to create competitive advantage.

Selecting the technology.

The third area to assess examines whether the AI tools being considered for each use case are truly up to the task. Chatbots and intelligent agents, for example, may frustrate some companies because most of them can’t yet match human problem solving beyond simple scripted cases (though they are improving rapidly). Other technologies, like robotic process automation that can streamline simple processes such as invoicing, may in fact slow down more-complex production systems. And while deep learning visual recognition systems can recognize images in photos and videos, they require lots of labeled data and may be unable to make sense of a complex visual field.

In time, cognitive technologies will transform how companies do business. Today, however, it’s wiser to take incremental steps with the currently available technology while planning for transformational change in the not-too-distant future. You may ultimately want to turn customer interactions over to bots, for example, but for now it’s probably more feasible—and sensible—to automate your internal IT help desk as a step toward the ultimate goal.

3. Launching Pilots

Because the gap between current and desired AI capabilities is not always obvious, companies should create pilot projects for cognitive applications before rolling them out across the entire enterprise.

Proof-of-concept pilots are particularly suited to initiatives that have high potential business value or allow the organization to test different technologies at the same time. Take special care to avoid “injections” of projects by senior executives who have been influenced by technology vendors. Just because executives and boards of directors may feel pressure to “do something cognitive” doesn’t mean you should bypass the rigorous piloting process. Injected projects often fail, which can significantly set back the organization’s AI program.

If your firm plans to launch several pilots, consider creating a cognitive center of excellence or similar structure to manage them. This approach helps build the needed technology skills and capabilities within the organization, while also helping to move small pilots into broader applications that will have a greater impact. Pfizer has more than 60 projects across the company that employ some form of cognitive technology; many are pilots, and some are now in production.

At Becton, Dickinson, a “global automation” function within the IT organization oversees a number of cognitive technology pilots that use intelligent digital agents and RPA (some work is done in partnership with the company’s Global Shared Services organization). The global automation group uses end-to-end process maps to guide implementation and identify automation opportunities. The group also uses graphical “heat maps” that indicate the organizational activities most amenable to AI interventions. The company has successfully implemented intelligent agents in IT support processes, but as yet is not ready to support large-scale enterprise processes, like order-to-cash. The health insurer Anthem has developed a similar centralized AI function that it calls the Cognitive Capability Office.

Business-process redesign.

As cognitive technology projects are developed, think through how workflows might be redesigned, focusing specifically on the division of labor between humans and the AI. In some cognitive projects, 80% of decisions will be made by machines and 20% will be made by humans; others will have the opposite ratio. Systematic redesign of workflows is necessary to ensure that humans and machines augment each other’s strengths and compensate for weaknesses.

The investment firm Vanguard, for example, has a new “Personal Advisor Services” (PAS) offering, which combines automated investment advice with guidance from human advisers. In the new system, cognitive technology is used to perform many of the traditional tasks of investment advising, including constructing a customized portfolio, rebalancing portfolios over time, tax loss harvesting, and tax-efficient investment selection.

Vanguard’s human advisers serve as “investing coaches,” tasked with answering investor questions, encouraging healthy financial behaviors, and being, in Vanguard’s words, “emotional circuit breakers” to keep investors on plan. Advisers are encouraged to learn about behavioral finance to perform these roles effectively. The PAS approach has quickly gathered more than $80 billion in assets under management, costs are lower than those for purely human-based advising, and customer satisfaction is high.

Vanguard understood the importance of work redesign when implementing PAS, but many companies simply “pave the cow path” by automating existing work processes, particularly when using RPA technology. By automating established workflows, companies can quickly implement projects and achieve ROI—but they forgo the opportunity to take full advantage of AI capabilities and substantively improve the process.

Cognitive work redesign efforts often benefit from applying design-thinking principles: understanding customer or end-user needs, involving employees whose work will be restructured, treating designs as experimental “first drafts,” considering multiple alternatives, and explicitly considering cognitive technology capabilities in the design process. Most cognitive projects are also suited to iterative, agile approaches to development.

4. Scaling Up

Many organizations have successfully launched cognitive pilots, but they haven’t had as much success rolling them out organization-wide. To achieve their goals, companies need detailed plans for scaling up, which requires collaboration between technology experts and owners of the business process being automated. Because cognitive technologies typically support individual tasks rather than entire processes, scale-up almost always requires integration with existing systems and processes. Indeed, in our survey, executives reported that such integration was the greatest challenge they faced in AI initiatives.

Companies should begin the scaling-up process by considering whether the required integration is even possible or feasible. If the application depends on special technology that is difficult to source, for example, that will limit scale-up. Make sure your business process owners discuss scaling considerations with the IT organization before or during the pilot phase: An end run around IT is unlikely to be successful, even for relatively simple technologies like RPA.

The health insurer Anthem, for example, is taking on the development of cognitive technologies as part of a major modernization of its existing systems. Rather than bolting new cognitive apps onto legacy technology, Anthem is using a holistic approach that maximizes the value being generated by the cognitive applications, reduces the overall cost of development and integration, and creates a halo effect on legacy systems. The company is also redesigning processes at the same time to, as CIO Tom Miller puts it, “use cognitive to move us to the next level.”

In scaling up, companies may face substantial change-management challenges. At one U.S. apparel retail chain, for example, the pilot project at a small subset of stores used machine learning for online product recommendations, predictions for optimal inventory and rapid replenishment models, and—most difficult of all—merchandising. Buyers, used to ordering product on the basis of their intuition, felt threatened and made comments like “If you’re going to trust this, what do you need me for?” After the pilot, the buyers went as a group to the chief merchandising officer and requested that the program be killed. The executive pointed out that the results were positive and warranted expanding the project. He assured the buyers that, freed of certain merchandising tasks, they could take on more high-value work that humans can still do better than machines, such as understanding younger customers’ desires and determining apparel manufacturers’ future plans. At the same time, he acknowledged that the merchandisers needed to be educated about a new way of working.

If scale-up is to achieve the desired results, firms must also focus on improving productivity. Many, for example, plan to grow their way into productivity—adding customers and transactions without adding staff. Companies that cite head count reduction as the primary justification for the AI investment should ideally plan to realize that goal over time through attrition or from the elimination of outsourcing.

The Future Cognitive Company

Our survey and interviews suggest that managers experienced with cognitive technology are bullish on its prospects. Although the early successes are relatively modest, we anticipate that these technologies will eventually transform work. We believe that companies that are adopting AI in moderation now—and have aggressive implementation plans for the future—will find themselves as well positioned to reap benefits as those that embraced analytics early on.

Through the application of AI, information-intensive domains such as marketing, health care, financial services, education, and professional services could become simultaneously more valuable and less expensive to society. Business drudgery in every industry and function—overseeing routine transactions, repeatedly answering the same questions, and extracting data from endless documents—could become the province of machines, freeing up human workers to be more productive and creative. Cognitive technologies are also a catalyst for making other data-intensive technologies succeed, including autonomous vehicles, the Internet of Things, and mobile and multichannel consumer technologies.

The great fear about cognitive technologies is that they will put masses of people out of work. Of course, some job loss is likely as smart machines take over certain tasks traditionally done by humans. However, we believe that most workers have little to fear at this point. Cognitive systems perform tasks, not entire jobs. The human job losses we’ve seen were primarily due to attrition of workers who were not replaced or through automation of outsourced work. Most cognitive tasks currently being performed augment human activity, perform a narrow task within a much broader job, or do work that wasn’t done by humans in the first place, such as big-data analytics.

Most managers with whom we discuss the issue of job loss are committed to an augmentation strategy—that is, integrating human and machine work, rather than replacing humans entirely. In our survey, only 22% of executives indicated that they considered reducing head count as a primary benefit of AI.

We believe that every large company should be exploring cognitive technologies. There will be some bumps in the road, and there is no room for complacency on issues of workforce displacement and the ethics of smart machines. But with the right planning and development, cognitive technology could usher in a golden age of productivity, work satisfaction, and prosperity.

A version of this article appeared in the January–February 2018 issue (pp.108–116) of Harvard Business Review.

Thomas H. Davenport is the President’s Distinguished Professor in Management and Information Technology at Babson College, a research fellow at the MIT Initiative on the Digital Economy, and a senior adviser at Deloitte Analytics. He is the author of more than a dozen management books; his latest is Only Humans Need Apply: Winners and Losers in the Age of Smart Machines.


Rajeev Ronanki is a principal at Deloitte Consulting, where he leads the cognitive computing and health care innovation practices. Some of the companies mentioned in this article are Deloitte clients.

Source: Harvard Business Review, January–February 2018 issue.

Why a Data and Analytics Strategy Today Gives Marketers an Advantage Tomorrow by Matt Lawson and Shuba Srinivasan

The content on this page was commissioned by our sponsor, Google Analytics 360 Suite. The MIT SMR editorial staff was not involved in the selection, writing, or editing of the content on this page.

A Google perspective by Matt Lawson, Director, Performance Ads Marketing, Google

Intelligent and mobile technologies have dramatically altered the customer journey, all but erasing the line between digital and offline consumer behavior. Whether tapping on their smartphone, speaking to a home device, or interacting with their car’s connectivity systems, consumers continuously engage with a wide array of digital systems when they want to learn something, do something, or buy something. It doesn’t matter if it’s a high-end purchase like a large-screen TV or a more mundane sundry like deodorant—it’s becoming second nature for consumers to steal moments throughout the day to answer any question that comes to mind, using an ever-increasing variety of devices.

The more personalized and relevant the results, the more meaningful these engagements become. Therein lies the new marketing opportunity for brands: building a data strategy to collect and analyze the trails of data consumers create, uncover insights, and take action to boost the value of these engagements.

The expanding array of channels and devices makes it more challenging—but also more important than ever—for marketers to gain a complete understanding of their audience and generate actionable insights. Brands that build a foundation of data and analytics and use advanced technology to deliver engaging experiences throughout the customer journey will have the advantage going forward.


Optimizing the Data Opportunity

Marketers of all stripes are well aware of the vast opportunity. Indeed, nearly 90 percent of senior business-to-consumer (B2C) marketing executives surveyed for a June 2017 study conducted by Econsultancy in partnership with Google said that understanding user journeys across channels and devices is critical to marketing success.1 However, the gap is widening between “mainstream” marketers and those who have reoriented their marketing and advertising approaches to reflect the need for more refined, integrated data strategies, according to the study.

The study found that leaders (defined as those whose marketing results significantly exceeded their business goals in 2016) are 50 percent more likely to have a clear understanding of customer journeys across channels and devices. They are also twice as likely to routinely take action based on analytical insights. Rather than focusing on clicks and conversions, these organizations are working to create a more complete understanding of customers across devices and channels—and capturing the long-term value of those relationships.

Tying together the many aspects of the customer journey requires breaking down the organizational silos that have developed in marketing and advertising over the years. No longer can marketers operate as a collection of disparate teams focused on search media, TV buying, performance marketing, brand and store buying. These groups need to become integrated, collaborative teams that are aligned behind common goals. They must work together to create a holistic view of customer behavior and brand performance that can be shared by all. In fact, 86 percent of all respondents in the Econsultancy study said eliminating silos is critical to expanding the use of data and analytics in decision-making.

Integrating the Technology Stack

Leading marketers are also updating their technology systems to support increased organizational alignment and revenue growth, according to the study. In addition, they are 52 percent more likely than the mainstream to have fully integrated the marketing and advertising technology stack. 

Left: Matt Lawson, Director, Performance Ads Marketing, Google; Right: Shuba Srinivasan, Adele and Norman Barron Professor of Management, Boston University Questrom School of Business

A unified marketing and advertising technology stack enables companies to not only identify valuable customer segments but also deliver customized experiences to them. Marketers with fully integrated solutions are 45 percent more likely to use audience-level data to personalize the customer experience, according to the study, and 60 percent more likely to optimize experiences in real time, using analytics. The Econsultancy research also shows they are able to accurately attribute business value to their marketing and advertising efforts to evaluate how channels work together and better allocate their investments.

Getting there takes time, commitment and continual refinement. For example, Google has “store visits” technology that can inform brands when a consumer clicks an ad and then enters a store. Retailers can extrapolate what this data means for their business performance by, for example, running “hold-out tests”: investing heavily in media in some markets while going dark in others, to gauge the incremental impact on results such as revenue growth or average order volumes. Based on that, they can arrive at a proxy business value for store visits and test it over time as assumptions change.
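
The arithmetic behind such a hold-out test is straightforward: compare growth in the markets that kept spending with growth in the markets that went dark, and treat the difference as a rough proxy for the media’s incremental impact. The market names and revenue figures below are invented for illustration.

    # "Test" markets kept spending on media; "hold-out" markets went dark.
    # Values are (revenue before, revenue after), indexed so "before" = 1.00.
    test_markets = {"Austin": (1.00, 1.12), "Denver": (1.00, 1.09)}
    holdout_markets = {"Tucson": (1.00, 1.03), "Omaha": (1.00, 1.02)}

    def average_growth(markets):
        return sum(after / before - 1 for before, after in markets.values()) / len(markets)

    test_growth = average_growth(test_markets)
    holdout_growth = average_growth(holdout_markets)

    # The difference is a rough proxy for the incremental impact of the media spend.
    incremental_lift = test_growth - holdout_growth
    print(f"test growth: {test_growth:.1%}, hold-out growth: {holdout_growth:.1%}")
    print(f"estimated incremental lift from media: {incremental_lift:.1%}")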

A Digital Foundation for the Future

The following key elements can help organizations foster a fully integrated, data-driven marketing function.

Executive buy-in. Unequivocal support from the top levels of the business is vital for a customer-centric, data-driven transformation to succeed.

Data-savvy marketers. The marketing function needs people with not only data skills but also an understanding of the potential to optimize real-time customer interactions based on data insights. Ideal marketing professionals are proficient in the application of analytics, naturally curious, and inclined toward ongoing optimization.

Cross-functional collaboration. Teams that work together will establish new metrics and build new benchmarks that deliver insight into how media impacts business goals and drives real-time decision-making.

A learning culture. Marketing transformation takes time and experimentation. Organizations need to commit to ongoing testing to deliver better experiences that drive business growth.

Technology investment. It’s crucial to consolidate data to not only visualize the customer as he or she moves across channels but also connect those insights back to enterprise data, analyze and segment it, and apply those insights to meaningful and profitable actions. Modern technology is increasingly either a growth accelerator or a business inhibitor.

For organizations that make the investment in a holistic data and analytics approach, the role of marketing in the business will expand. Rather than functioning solely as an acquisition vehicle, marketing will become an engine of growth, driving effective upselling, cross-selling and customer retention. When marketing leaders develop a data and analytics strategy to better understand customer journeys, invest in a unified technology platform, and collaborate on achieving shared business goals, they can offer tremendous value in the form of actionable insights that will benefit not only customers but also the business. 

Scholar Perspective

Messages and Metrics That Match the Customer Journey

A scholar perspective by Shuba Srinivasan, Adele and Norman Barron Professor of Management at Boston University Questrom School of Business

Increasing demands for accountability have created a sense of urgency for marketers to determine the most effective metrics for both driving growth and demonstrating marketing’s value. But mobility has complicated those efforts. We live in an always-on world. That’s an enormous challenge for marketing organizations, but one with a huge upside if they can turn data into insight.

On the one hand, multichannel access to searching and shopping results in conversion friction—a straightforward, sequential customer journey no longer exists. On the other hand, consumers can search and shop 24/7/365, generating enormous amounts of customer data that can empower marketers to better serve their customers. According to a study published in the Marketing Science journal, multichannel shoppers are more profitable than single-channel customers. This fortifies the case for marketers to master their messages and metrics for the omnichannel world. Smart brands are trying to figure out how to target the right customer with the right message at the right time.

An enormous untapped opportunity exists in mobile ad platforms. Mary Meeker’s most recent Internet Trends report notes that U.S. consumers spend 28 percent of their time on mobile devices, but that mobile accounts for just 21 percent of advertising dollars. That leaves a $16 billion opportunity sitting on the table.

More importantly, however, consumers are embracing mobile as a complement to, not a replacement for, other channels. We’ve found that a multichannel marketing approach compounds the financial impact of the marketing spend. When brands simultaneously invest across channels, they see a significant increase in results—one plus one equals much more than two. This points to the need for better coordination among the often independent entities that handle paid search, online display and TV/print/radio advertising.


Metrics That Matter

The best way to engage with customers depends on where they are in their decision-making process. We recently studied the financial impact of online display ads and paid search, analyzing the results of more than 1,600 companies over a five-year period. Online display ads, which are typically shown to consumers in the awareness phase of the customer journey, are initiated by the brand and cast a wider net. Paid search, meanwhile, is typically delivered to consumers in the consideration and purchase phases, is initiated by the consumer and addresses a narrower audience. Based on our research, we’ve found that online display and paid search advertising each exhibit significantly positive effects on business performance and firm value. We also found that online display advertising has distinct long-term value, while the differential benefit of paid search accrues in the short term.

The marketing metrics that matter vary throughout the customer journey, too. In the early stages, important metrics might include the length of time a customer spends in the store or how often they visit the website. At the purchase stage, it would make more sense to measure revenue per user, conversion rates and acquisition costs. Post-purchase, metrics such as retention, lifetime customer value and loyalty serve as proxies for future financial impact.
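
As a worked illustration of a few purchase-stage and post-purchase metrics, the sketch below uses invented numbers; the lifetime-value line uses one common simplified formula (margin per period scaled by retention and a discount rate), not the only way to compute it.

    # Invented numbers to illustrate a few of these metrics.
    visitors, purchasers = 12_000, 480
    revenue, ad_spend = 36_000.0, 9_600.0

    conversion_rate = purchasers / visitors        # purchase-stage metric
    revenue_per_user = revenue / purchasers
    acquisition_cost = ad_spend / purchasers

    # A simplified lifetime-value formula: margin per period m, retention rate r, discount rate d.
    m, r, d = 25.0, 0.80, 0.10
    lifetime_value = m * r / (1 + d - r)

    print(f"conversion rate: {conversion_rate:.1%}")
    print(f"revenue per user: ${revenue_per_user:.2f}, acquisition cost: ${acquisition_cost:.2f}")
    print(f"simplified customer lifetime value: ${lifetime_value:.2f}")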

Gearing Up for a New Approach

Determining how to best engage with customers and measure the effectiveness of those efforts is something each marketing organization must determine for itself through experimentation. This requires new skills, mindsets and processes, including:

Analytics expertise. Today’s marketing organizations require professionals with the quantitative expertise to analyze data and account for causality, along with the business and domain knowledge to put all the pieces together.

Willingness to learn. Figuring out the right approaches and metrics demands a willingness to experiment and conduct A/B testing to determine what works. A commitment must also come from the very top for data-driven marketing decision-making.

Customer empathy. Although quantitative skills are critical, so is understanding of the customer. Without a clear idea of a customer’s goals and motivations throughout the customer journey, all the numbers are meaningless.

Mobile and omnichannel customer behaviors are here to stay—and it’s vital for marketing organizations to adapt their own behaviors to maximize success.

About the Authors: Matt Lawson is the director of performance ads marketing at Google. Shuba Srinivasan is the Adele and Norman Barron Professor of Management at Boston University Questrom School of Business.

MIT SMR Custom Studio

MIT SMR Custom Studio is an independent content creation unit within MIT Sloan Management Review.

References (1)

1. Econsultancy/Google. “Customer Experience Is Written in Data,” May 2017. We surveyed 677 marketing and measurement executives at companies with over $250 million in revenues, primarily in North America. Total respondents included 199 “leading marketers,” who reported that marketing significantly exceeded top business goals in 2016, and 478 “mainstream marketers.” https://goo.gl/uxd9mx 

Competitive Intelligence: Travis Kalanick steps down as chief executive of Uber

“WE HAVE a lot of attention as it is. I don’t even know how we could get more,” Travis Kalanick, the boss of Uber, said last year. The ride-hailing giant found a way. Mr Kalanick failed to manage the fallout from a series of high-profile blunders and scandals.
On June 20th he resigned as chief executive officer of the firm he co-founded in 2009. Uber is facing several crises, including senior executive departures, a lawsuit over alleged intellectual-property theft, claims about sexual harassment and a federal probe into its use of potentially illegal software to track regulators.

Mr Kalanick had previously said he would take a leave of absence, in part to deal with a personal tragedy—the death of his mother in a boating accident. That was not enough for investors in Uber, who asked him to make his leave permanent.

Uber will not change overnight. Mr Kalanick trained it to be unrelentingly competitive, aggressive and ready to break rules. That culture helped make it the most prominent private American technology firm, with a valuation of nearly $70bn. But the impact of Mr Kalanick’s self-styled “always be hustlin’ ” approach has been stark. Uber’s controversies have dented its brand, hurt its ability to recruit the best engineers and cost it customers in America, who are defecting to its rival, Lyft.

The identity of Mr Kalanick’s replacement will be crucial. Uber’s board will seek an experienced boss, perhaps a woman. He or she will need experience running a multinational. Whether the board should hire someone with a background in transport (perhaps from an airline or logistics firm) or a candidate from the technology industry is unclear. Some have suggested that Sheryl Sandberg, who serves as number two at Facebook, would be a good choice, but she may not be willing to jump.

Investors in Uber have accepted that Mr Kalanick will stay on the company’s board (along with his co-founder and another early executive, he controls the majority of super-voting shares) so he is likely to have a strong influence on the firm. He will need to exercise restraint. Twitter, an internet company that is struggling to attract more users, found it hard to settle on a clear strategy in part because several co-founders who once ran it continued to serve on the board and second-guessed the boss.

Mr Kalanick’s departure should be enough to placate some alienated customers. Regulators may treat Uber more kindly, too. Abroad, its scandals have barely registered. In the first quarter of this year it notched up record revenues, of $3.4bn. Its losses, of around $700m, are still high but diminishing. The next chief executive will need to decide whether to chase growth and endure continued steep losses, or cut back on international expansion in order to make more money. After watching Mr Kalanick push the pedal to the metal, Uber’s investors may hope that a more conservative era—in terms of finances as well as culture—is about to begin.

This article appeared in the Business section of the print edition under the headline “Gear change”

Competitive Intelligence: “Basic economy” class is winning over flyers

GULLIVER wrote last week about American Airlines handing indignant flyers a notable victory. The carrier rescinded a plan to take away an inch of legroom from economy-class seats on new planes, following a public outcry. Such concessions are rare. Airlines generally worry about how customers vote with their wallets, not how they grumble with their words. Hence, they cut comforts to offer the low fares that people demand.

Anyone hoping that American Airlines’ climbdown might signal a reversal of that trend should think again. Earlier this year, United Airlines introduced a new class of fare, “basic economy”. Such tickets, which strip out those few remaining comforts that economy passengers enjoy, have been derided as “last class”. But, like it or not, cost-conscious passengers are showing their approval.

The airline expanded the programme to all domestic markets last month. Andrew Levy, United’s CFO, said last week that about 30-40% of economy-class passengers have chosen basic-economy fares since they were introduced. These fares tend to be $15 to $20 cheaper for a one-way flight than regular economy, but they come with significant drawbacks. Flyers cannot take a carry-on bag on board (just a personal item), select their seats in advance, or be eligible for certain upgrades. And the fares are not even lower than they were before. As the airline explained to Gulliver earlier this year, basic economy tickets cost the same as standard economy ones used to. It is the latter that have been made more expensive.

So there is good reason not to love last class. Yet flyers are choosing it in their hordes. It is an inspired move by United and the other big American carriers, many of which have adopted similar programmes. Passengers who fly basic economy are often confused or surprised by the restrictions and end up paying extra to carry on bags. Often those fees are larger than the difference between the fares for standard and basic economy. Those who fly standard economy are now paying more for the privilege.

The profits for United will be mighty. Delta, which was the first to introduce basic economy, earned an additional $20m from it in the first three months of 2016. And that was when it was available on only 8% of the airline’s routes.

For travellers, there is less to be excited about. On social media, some flyers have expressed confusion and frustration over the restrictions (although, to be fair to the airlines, they are hardly hidden). But one angry flyer might have inadvertently put it best when tweeting:

How many times does @United have to abuse me before I finally leave it? #BasicEconomy, with its no carry-on policy, is such a sleazy move.

— Miss GoWhitely (@missgowhitely) June 3, 2017

The answer appears to be “many”.

Source: GULLIVER, The Economist, Jun 19th 2017.

Competitive Intelligence: What to Expect From Artificial Intelligence by Ajay Agrawal, Joshua S. Gans, and Avi Goldfarb

To understand how advances in artificial intelligence are likely to change the workplace — and the work of managers — you need to know where AI delivers the most value.

Major technology companies such as Apple, Google, and Amazon are prominently featuring artificial intelligence (AI) in their product launches and acquiring AI-based startups. The flurry of interest in AI is triggering a variety of reactions — everything from excitement about how the capabilities will augment human labor to trepidation about how they will eliminate jobs. In our view, the best way to assess the impact of radical technological change is to ask a fundamental question: How does the technology reduce costs? Only then can we really figure out how things might change.

To appreciate how useful this framing can be, let’s review the rise of computer technology through the same lens. Moore’s law, the long-held view that the number of transistors on an integrated circuit doubles approximately every two years, dominated information technology until just a few years ago. What did the semiconductor revolution reduce the cost of? In a word: arithmetic.

This answer may seem surprising since computers have become so widespread. We use them to communicate, play games and music, design buildings, and even produce art. But deep down, computers are souped-up calculators. That they appear to do more is testament to the power of arithmetic. The link between computers and arithmetic was clear in the early days, when computers were primarily used for censuses and various military applications. Before semiconductors, “computers” were humans who were employed to do arithmetic problems. Digital computers made arithmetic inexpensive, which eventually resulted in thousands of new applications for everything from data storage to word processing to photography.

AI presents a similar opportunity: to make something that has been comparatively expensive abundant and cheap. The task that AI makes abundant and inexpensive is prediction — in other words, the ability to take information you have and generate information you didn’t previously have. In this article, we will demonstrate how improvement in AI is linked to advances in prediction. We will explore how AI can help us solve problems that were not previously prediction oriented, how the value of some human skills will rise while others fall, and what the implications are for managers. Our speculations are informed by how technological change has affected the cost of previous tasks, allowing us to anticipate how AI may affect what workers and managers do.

Machine Learning and Prediction

The recent advances in AI come under the rubric of what’s known as “machine learning,” which involves programming computers to learn from example data or past experience. Consider, for example, what it takes to identify objects in a basket of groceries. If we could describe how an apple looks, then we could program a computer to recognize apples based on their color and shape. However, there are other objects that are apple-like in both color and shape. We could continue encoding our knowledge of apples in finer detail, but in the real world, the amount of complexity increases exponentially.

Environments with a high degree of complexity are where machine learning is most useful. In one type of training, the machine is shown a set of pictures with names attached. It is then shown millions of pictures that each contain named objects, only some of which are apples. As a result, the machine notices correlations — for example, apples are often red. Using correlates such as color, shape, texture, and, most important, context, the machine references information from past images of apples to predict whether an unidentified new image it’s viewing contains an apple.
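
The following sketch compresses that idea into a few lines: each “image” is reduced to two invented correlates (redness and roundness), and a nearest-neighbor classifier references labeled past examples to predict whether a new, unlabeled example is an apple. Real systems learn far richer features directly from pixels; the data and features here are synthetic stand-ins.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(2)

    # Invented training data: each "image" reduced to (redness, roundness) in [0, 1].
    # Label 1 = apple, 0 = not an apple.
    apples = rng.normal(loc=[0.8, 0.9], scale=0.08, size=(200, 2))
    not_apples = rng.normal(loc=[0.4, 0.5], scale=0.20, size=(200, 2))
    X = np.vstack([apples, not_apples])
    y = np.array([1] * 200 + [0] * 200)

    # The classifier "references" labeled past examples to judge a new one.
    classifier = KNeighborsClassifier(n_neighbors=5).fit(X, y)

    new_image = np.array([[0.75, 0.85]])   # quite red, quite round
    print("estimated probability of apple:", classifier.predict_proba(new_image)[0, 1])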

When we talk about prediction, we usually mean anticipating what will happen in the future. For example, machine learning can be used to predict whether a bank customer will default on a loan. But we can also apply it to the present by, for instance, using symptoms to develop a medical diagnosis (in effect, predicting the presence of a disease). Using data this way is not new. The mathematical ideas behind machine learning are decades old. Many of the algorithms are even older. So what has changed?

Recent advances in computational speed, data storage, data retrieval, sensors, and algorithms have combined to dramatically reduce the cost of machine learning-based predictions. And the results can be seen in the speed of image recognition and language translation, which have gone from clunky to nearly perfect. All this progress has resulted in a dramatic decrease in the cost of prediction.

The Value of Prediction

So how will improvements in machine learning impact what happens in the workplace? How will they affect one’s ability to complete a task, which might be anything from driving a car to establishing the price for a new product? Once actions are taken, they generate outcomes. (See “The Anatomy of a Task.”) But actions don’t occur in a vacuum. Rather, they are shaped by underlying conditions. For example, a driver’s decision to turn right or left is influenced by predictions about what other drivers will do and what the best course of action may be in light of those predictions.

The Anatomy of a Task

Actions are shaped by underlying conditions and by how uncertainty is resolved, and they lead to outcomes. Drivers, for example, need to observe the immediate environment and make adjustments to minimize the risk of accidents and avoid bottlenecks. In doing so, they use judgment in combination with prediction.

Seen in this way, it’s useful to distinguish between the cost and the value of prediction. As we have noted, advances in AI have reduced the cost of prediction. Just as important is what has happened to its value. Prediction becomes more valuable when data is more widely available and more accessible. The computer revolution has enabled huge increases in both the amount and variety of data. As data availability expands, prediction becomes possible in an ever-wider variety of tasks.

Autonomous driving offers a good example. The technology required for a car to accelerate, turn, and brake without a driver is decades old. Engineers initially focused on directing the car with what computer scientists call “if-then-else” algorithms, such as “If an object is in front of the car, then brake.” But progress was slow; there were too many possibilities to codify everything. Then, in the early 2000s, several research groups pursued a useful insight: A vehicle could drive autonomously by predicting what a human driver would do in response to a set of inputs (inputs that, in the vehicle’s case, could come from camera images, laser-based LIDAR measurements, and mapping data). The recognition that autonomous driving was a prediction problem solvable with machine learning meant that autonomous vehicles could become a reality in the marketplace years earlier than had been anticipated.
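To make that framing concrete, here is a minimal sketch, assuming the problem can be posed as plain supervised learning: hypothetical sensor readings paired with the brake pressure a human driver applied, fitted with ordinary least squares. The data, features, and model are invented for illustration; production systems use far richer inputs and models.

# A minimal sketch of "predict what a human driver would do", not a real
# self-driving stack. Sensor readings and logged human actions are invented.
import numpy as np

# Each row is [distance_to_obstacle_m, lane_offset_m, speed_mps].
sensor_log = np.array([
    [40.0,  0.0, 20.0],
    [15.0,  0.1, 18.0],
    [ 5.0, -0.2, 12.0],
    [60.0,  0.3, 25.0],
    [ 8.0,  0.0, 10.0],
])
# The human driver's recorded brake pressure (0 = none, 1 = full) per row.
human_brake = np.array([0.0, 0.3, 0.9, 0.0, 0.8])

# Fit a linear predictor with ordinary least squares (adding an intercept).
X = np.hstack([sensor_log, np.ones((len(sensor_log), 1))])
weights, *_ = np.linalg.lstsq(X, human_brake, rcond=None)

# Predict the braking a human would likely apply in a new situation.
new_situation = np.array([[10.0, 0.0, 15.0, 1.0]])
predicted = new_situation @ weights
print(predicted[0])  # predicted brake pressure

The point is not the particular model but the reframing: instead of hand-coding if-then rules for every possible situation, the system predicts the action a human driver would take.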

Who Judges?

Judgment is the ability to make considered decisions — to understand the impact different actions will have on outcomes in light of predictions. Tasks where the desired outcome can be easily described and there is limited need for human judgment are generally easier to automate. For other tasks, describing a precise outcome can be more difficult, particularly when the desired outcome resides in the minds of humans and cannot be translated into something a machine can understand.

This is not to say that our understanding of human judgment won’t improve and therefore become subject to automation. New modes of machine learning may find ways to examine the relationships between actions and outcomes, and then use the information to improve predictions. We saw an example of this in 2016, when AlphaGo, the artificial intelligence program developed by Google’s DeepMind, succeeded in beating one of the top players in the world in the game of Go. AlphaGo honed its capability by analyzing thousands of human-to-human Go games and playing against itself millions of times. It then incorporated the feedback on actions and outcomes to develop more accurate predictions and new strategies.

Examples of machine learning are beginning to appear in more everyday contexts. For instance, x.ai, a New York City-based artificial intelligence startup, provides a virtual personal assistant for scheduling appointments over email and managing calendars. To train the virtual assistants, the development team had them study email exchanges in which people schedule meetings with one another, so the technology could learn to anticipate human responses and the choices people make. Although this training didn’t produce a formal catalog of outcomes, the idea is to help virtual assistants mimic human judgment so that, over time, the feedback can turn some aspects of judgment into prediction problems.

By breaking down tasks into their constituent components, we can begin to see ways AI will affect the workplace. Although the discussion about AI is usually framed in terms of machines versus humans, we see it more in terms of understanding the level of judgment necessary to pursue actions. In cases where whole decisions can be clearly defined with an algorithm (for example, image recognition and autonomous driving), we expect to see computers replace humans. This will take longer in areas where judgment can’t be easily described, although as the cost of prediction falls, the number of such tasks will decline.

Employing Prediction Machines

Major advances in prediction may facilitate the automation of entire tasks. This will require machines that can both generate reliable predictions and rely on those predictions to determine what to do next. For example, for many business-related language translation tasks, the role of human judgment will become limited as prediction-driven translation improves (though judgment might still be important when translations are part of complex negotiations). However, in other contexts, cheaper and more readily available predictions could lead to increased value for human-led judgment tasks. For instance, Google’s Inbox by Gmail can process incoming email messages and propose several short responses, but it asks the human to judge which automated response is most appropriate. Selecting from a list of choices is faster than typing a reply, enabling the user to respond to more emails in less time.

Medicine is an area where AI will likely play a larger role — but humans will still have an important role, too. Although artificial intelligence can improve diagnosis, which is likely to lead to more effective treatments and better patient care, treatment and care will still rely on human judgment. Different patients have different needs, which humans are better able to respond to than machines. There are many situations where machines may never be able to weigh the pros and cons of one course of action against another in a manner that is acceptable to humans.

The Managerial Challenge

As artificial intelligence technology improves, predictions by machines will increasingly take the place of predictions by humans. As this scenario unfolds, what roles will humans play that emphasize their strengths in judgment while recognizing their limitations in prediction? Preparing for such a future requires considering three interrelated insights:

1. Prediction is not the same as automation. Prediction is an input into automation, but successful automation requires a variety of other activities. Tasks are made up of data, prediction, judgment, and action. Machine learning involves just one component: prediction. Automation also requires that machines be involved with data collection, judgment, and action. For example, autonomous driving involves vision (data); anticipating what action a human driver would take given those sensory inputs (prediction); assessment of consequences (judgment); and acceleration, braking, and steering (action). Medical care can involve information about the patient’s condition (data); diagnostics (prediction); treatment choices (judgment); bedside manner (judgment and action); and physical intervention (action). Prediction is the aspect of automation in which the technology is currently improving especially rapidly, although sensor technology (data) and robotics (action) are also advancing quickly.

2. The most valuable workforce skills involve judgment. In many work activities, prediction has been the bottleneck to automation. In some activities, such as driving, this bottleneck has meant that human workers have remained involved in prediction tasks. Going forward, such human involvement is all but certain to diminish. Instead, employers will want workers to augment the value of prediction; the future’s most valuable skills will be those that are complementary to prediction — in other words, those related to judgment. Consider this analogy: The demand for golf balls rises if the price of golf clubs falls, because golf clubs and golf balls are what economists call complementary goods. Similarly, judgment skills are complementary to prediction and will be in greater demand if the price of prediction falls due to advances in AI. For now, we can only speculate on which aspects of judgment are apt to be most vital: ethical judgment, emotional intelligence, artistic taste, the ability to define tasks well, or some other forms of judgment. However, it seems likely that organizations will have continuing demand for people who can make responsible decisions (requiring ethical judgment), engage customers and employees (requiring emotional intelligence), and identify new opportunities (requiring creativity).

Judgment-related skills will be increasingly valuable in a variety of settings. For example, if prediction leads to cheaper, faster, and earlier diagnosis of diseases, nursing skills related to physical intervention and emotional comfort may become more important. Similarly, as AI becomes better at predicting shopping behavior, skilled human greeters at stores may help differentiate retailers from their competitors. And as AI becomes better at anticipating crimes, private security guards who combine ethical judgment with policing skills may be in greater demand. The part of a task that requires human judgment may change over time, as AI learns to predict human judgment in a particular context. Thus, the judgment aspect of a task will be a moving target, requiring humans to adapt to new situations where judgment is required.

3. Managing may require a new set of talents and expertise. Today, many managerial tasks are predictive. Hiring and promoting decisions, for example, are predicated on prediction: Which job applicant is most likely to succeed in a particular role? As machines become better at prediction, managers’ prediction skills will become less valuable while their judgment skills (which include the ability to mentor, provide emotional support, and maintain ethical standards) become more valuable.

Increasingly, the role of the manager will involve determining how best to apply artificial intelligence, by asking questions such as: What are the opportunities for prediction? What should be predicted? How should the AI agent learn in order to improve predictions over time? Managing in this context will require judgment both in identifying and applying the most useful predictions and in weighing the relative costs of different types of errors. Sometimes the objective will be clearly defined (for example, identifying people from their faces). Other times, it will be less clear and will therefore require judgment to specify the desired outcome. In such cases, managers’ judgment will become a particularly valuable complement to prediction technology.

Looking Ahead

At the dawn of the 21st century, the most common prediction problems in business were classic statistical questions such as inventory management and demand forecasting. However, over the last 10 years, researchers have learned that image recognition, driving, and translation may also be framed as prediction problems. As the range of tasks that are recast as prediction problems continues to grow, we believe the scope of new applications will be extraordinary. The key challenges for executives will be (1) shifting the training of employees from a focus on prediction-related skills to judgment-related ones; (2) assessing the rate and direction of the adoption of AI technologies in order to properly time the shifting of workforce training (not too early, yet not too late); and (3) developing management processes that build the most effective teams of judgment-focused humans and prediction-focused AI agents.

Neuroscientists rethink how the brain recognizes faces

Monkeys can recognize faces thanks to a suite of neurons that identify particular facial features. Credit: Solvin Zanki/naturepl.com

People can pick a familiar face out of a crowd without thinking too much about it. But how the brain actually does this has eluded researchers for years. Now, a study shows that rhesus macaque monkeys rely on the coordination of a group of hundreds of neurons that pay attention to certain sets of physical features to recognize a face.

The findings, published on 1 June in Cell, clarify an issue that has been the subject of multiple theories but no satisfying explanations (L. Chang and D. Y. Tsao Cell http://dx.doi.org/10.1016/j.cell.2017.05.011; 2017).

“The real cartoon view has been that individual cells are dedicated to respond to individual people,” says David Leopold, a neuroscientist at the US National Institute of Mental Health in Bethesda, Maryland. But other theories suggested that groups of neurons worked in concert to recognize a face.

The latest results show that each neuron associated with facial recognition, called a face cell, pays attention to specific ranked combinations of facial features. “We have cracked the code,” says study co-author Doris Tsao, a systems neuroscientist at the California Institute of Technology (Caltech) in Pasadena.

A leap forward

To start, Tsao and Le Chang, a neuroscientist also at Caltech, studied the brains of two rhesus macaque monkeys (Macaca mulatta) to determine the location of the animals’ face cells. They showed the monkeys images of human faces or other objects, including bodies, fruit and random patterns. They then used functional magnetic resonance imaging to see which brain regions lit up when the animals saw a face.

The team focused on those hotspots to see what the face cells were doing. Tsao and Chang used a set of 2,000 human faces with varying characteristics, such as the distance between the eyes or the shape of the hairline, for the monkeys to view. The neuroscientists then implanted electrodes into the macaques’ brains to compare the responses of individual neurons to the facial differences.

Tsao and Chang recorded responses from a total of 205 neurons between the two monkeys. Each neuron responded to a specific combination of some of the facial parameters.

“They have developed a model that goes from a picture on a computer screen to the responses of neurons way the heck down in the visual cortex,” says Greg Horwitz, a visual neurophysiologist at the University of Washington in Seattle. “This takes a huge step forward,” he says, because the model maps out how each cell responds to all possible combinations of facial features, instead of just one.

Playing favourites

Tsao and Chang wondered whether, within the specific combination of characteristics that a face cell recognized, each neuron was better tuned to particular features than to others. They tested this idea by trying to recreate the faces the monkeys were shown, on the basis of each neuron’s response to its cast of characteristics. Based on the strength of those signals, the neuroscientists could recreate the real faces almost perfectly.
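As a toy version of that kind of decoding, the sketch below simulates a population of face cells whose firing varies linearly along facial-feature dimensions and then recovers the feature vector from the population response with least squares. The cell count mirrors the 205 neurons recorded, but the simulated tuning, noise level, and 50-dimensional feature space are invented; this is an illustration of linear decoding, not the authors’ analysis pipeline.

# A toy sketch of linear decoding from a simulated population of "face cells".
# All numbers and tuning curves here are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_cells = 50, 205   # feature dimensions per face; recorded neurons

# Assume each cell responds to its own linear combination of facial features.
tuning = rng.normal(size=(n_cells, n_features))

# A "face" is a point in feature space; responses follow tuning @ face + noise.
true_face = rng.normal(size=n_features)
responses = tuning @ true_face + rng.normal(scale=0.1, size=n_cells)

# Decode: find the feature vector that best explains the population response.
decoded_face, *_ = np.linalg.lstsq(tuning, responses, rcond=None)

# The reconstruction closely matches the original face's features.
print(np.corrcoef(true_face, decoded_face)[0, 1])  # correlation near 1

With more cells than feature dimensions, the least-squares estimate is heavily overdetermined, which is one reason a population of a few hundred neurons can recover a face so well, at least in this toy setting.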

When the monkeys saw faces that varied according to features that a neuron didn’t care about, the individual face cell’s response remained unchanged.

In other words, “the neuron is not a face detector, it’s a face analyser”, says Leopold. The brain “is able to realize that there are key dimensions that allow one to say that this is Person A and this is Person B.”

Human brains probably use this code to recognize or imagine specific faces, says Tsao. But scientists are still unsure about how everything is linked together.

One message is clear for neuroscientists. “If their inclination is to think: ‘We know how faces are recognized because there are a small number of face cells that sing loud when the right face is seen,’ I think that notion should gradually go away, because it’s not right,” says Leopold. “This study presents a more realistic alternative to how the brain actually goes and analyses individuals.”

Source: nature.com, 01 June 2017