Survey highlights the companies most active during the pandemic

With its corporate-governance ranking now in its seventh edition, released today, the Spanish consultancy Merco ran a parallel assessment this year, listing the companies that acted most responsibly during the pandemic in 2020. Natura, which tops the governance assessment for the seventh consecutive time, was also the standout in the survey covering the first year of the covid-19 crisis.

The Merco Ranking of Corporate Responsibility and Governance in Brazil includes, beyond the hundred companies rated highest on governance, a list of ten companies in a specific assessment of performance during the pandemic. The ranking was compiled from responses by groups such as company directors, financial analysts, governments, consumer associations, and the general public.

Lylian Brandão, managing director of Merco Brasil, explains that the main list is built around five values, such as transparency, responsibility toward employees, and contribution to the community. The questions are weighted differently depending on the audience. "The top company in the eyes of NGOs may not be the one financial analysts pick," she notes.

Even so, the executive says there is little turnover among the top 10 companies in the Merco Ranking from one edition to the next, reflecting long-term recognition. "Larger swings show up from audience to audience, but the leaders of the top 10 are well positioned across all audiences overall," she says.

The covid-19 ranking, by contrast, was built from closed-ended responses in which the only question asked was the name of the nominated company. The new category accounted for 8% of the weighting in the general list's scores.

Magazine Luiza came second in the pandemic assessment and also debuted in the general Top 10 this edition: the retailer jumped from 14th to 5th place in the Merco Ranking between 2019 and 2020.

Ana Luiza Herzog, Magalu's corporate manager for reputation and sustainability, says the pandemic spotlighted the companies that have been offering the greatest contribution to society. "In a very extreme scenario, our values became evident, and that let the company make our initiatives even more concrete, from the inside out," she says.

Source: Ana Luiza de Carvalho — Valor, 06/22/2021.

Computer Science for All?

Nicole Reitz-Larsen uses movement to teach computer science at West High School in Salt Lake City. She used to teach German and business.

Step into Nicole Reitz-Larsen’s classroom in Salt Lake City’s West High School and see students grooving to “Single Ladies” or zigzagging to execute one of LeBron James’s handshakes. You might think it’s a dance class. It’s not.

Reitz-Larsen is teaching computer science through movement. The former German-language and business instructor found that linking difficult concepts such as algorithms and the binary system to students’ interests helps the students grasp a topic that many were leery about before they stepped into her class.
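The "binary system" she ties to movement can be shown in a few lines of code. This sketch is a generic illustration of binary encoding, not material from Reitz-Larsen's class:

```python
# Each letter is stored as a number, and each number as a pattern of bits.
# A "step" (1) or "pause" (0) for each bit is enough to act a letter out.
def to_bits(ch: str) -> str:
    return format(ord(ch), "08b")  # 8 bits per character (ASCII)

for ch in "CS":
    print(ch, "->", to_bits(ch))
# C -> 01000011
# S -> 01010011
```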

“I’m always thinking about how to sell it to my students,” said Reitz-Larsen, who learned how to teach the complex subject in three months after administrators asked her to pioneer it at West. “You have those kids who say, ‘I’m never going to use this.’”

Young people who are glued to their phones and laptops for many of their waking hours are often apathetic when it comes to figuring out what makes their devices tick. About one of every three girls and half of boys think computer science is important for them to learn, according to a 2020 Google/Gallup, Inc., survey of 7,000 educators, parents, and students.

The finding came four years after President Barack Obama declared that computer science is as essential for K–12 students as reading, writing, and arithmetic. The announcement gave momentum to a computer-science-for-all movement and propelled industry-backed nonprofits such as Code.org to the forefront of debates about what should be taught in schools. Joe Biden, both as vice president and during his 2020 presidential campaign, emphasized his support for having K–12 students learn the subject.

The effort is part of a broader attempt to overhaul and update the U.S. education system. Proponents argue that it’s time to amend the public-school curriculum to reflect life skills demanded by the ever-changing Information Age. Such a reframing is necessary, they say, to ensure students can compete for positions focused on cloud computing, artificial intelligence, and mobile-app development.

After Obama’s high-profile endorsement of Code.org’s mission, the organization joined educators and other advocates to help persuade state legislatures to allocate millions of dollars toward new laws that advance its vision that “every student in every school has the opportunity to learn computer science.”

Some states made more progress than others. Thirty-seven adopted computer-science standards for K–12, and 20 required all high schools to offer the subject. In Nevada and South Carolina, the discipline is now a graduation requirement. New York City committed to making the subject available at every K–12 school by 2025. New rules such as these helped drive about 186,000 students to take Advanced Placement computer-science tests in 2020, nine times more than in 2010.

A 2020 report from Code.org found that 47 percent of the nation’s high schools teach computer science. Despite a growing belief among parents, administrators, and students in computer science’s benefits, and millions of dollars allocated to offering it in K–12 schools, gaps in access and participation among Black, Hispanic, and white students persist.

Today, computer-science-for-all leaders acknowledge they’ve hit a plateau and that they need more-widespread buy-in from lawmakers and educators and increased funding to overcome disparities in the U.S. education system that fall along racial and socioeconomic lines.

“Early on, we got all these early-adopter states, school districts, and teachers raising their hands, and there was a frenzy of activity. Now we’re moving into people being told to do it,” said Ruthe Farmer, chief evangelist for CSforAll, a New York–based nonprofit. “The skepticism around how we’re going to get this done is still there.”

Constraining the movement’s growth are a scarcity of well-qualified teachers, particularly in math and science, and competition for resources in cash-strapped school districts. Hard-fought progress was also stalled by the coronavirus pandemic, when states such as Colorado and Missouri reallocated or froze funding dedicated to broadening access to the subject in K–12.

At the same time, Covid-19 laid bare long-standing inequities in access to laptops and high-speed broadband connections necessary to expand availability across cultures and to English language learners, rural students, and those with disabilities.

What Is Computer Science?

There is consensus on what computer science is not—basic computing skills such as Internet searching, keyboarding, and using a spreadsheet—but no universal agreement on what it actually is. There are many different definitions, largely because decisions about what and how students are taught are made at the state, district, and school level. New York emphasizes digital literacy; Texas incorporated the discipline into its technical career standards.

Many proponents of the computer-science-for-all movement, which began in the early 2000s, spend considerable time trying to dispel the notion that it’s solely about learning coding.

Coding languages used in developing software are a tool for computer science, educators say, just as arithmetic is a tool for math and words are a tool for verbal communication. At its core, computer science is about learning how to create new technologies, rather than simply using them, advocates stress. It strives, for example, to teach students how to design the software that will make the spreadsheet.

Just as important as coding, backers add, are foundational concepts such as computational thinking. This approach to computer science provides students with a way to solve problems by breaking them down into parts, and it can be integrated across subjects as early as kindergarten.
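As a concrete illustration of that decomposition idea (an invented example, not drawn from any particular curriculum), a small problem can be split into named parts that are solved separately and then combined:

```python
# Computational thinking: break a problem into smaller named steps.
# Problem: report each student's pass/fail grade from a list of scores.
def read_scores(raw: str) -> list[int]:   # part 1: parse the input
    return [int(s) for s in raw.split(",")]

def to_grade(score: int) -> str:          # part 2: one small decision
    return "pass" if score >= 60 else "fail"

def report(raw: str) -> list[str]:        # part 3: combine the parts
    return [to_grade(s) for s in read_scores(raw)]

print(report("72,45,90"))  # ['pass', 'fail', 'pass']
```

Each part can be understood, tested, and "debugged" on its own, which is the habit the standards are after.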

In some states, computer-science standards overlap with math standards and involve concepts such as sequencing, ordering, and sorting. Standards can also include science concepts such as devising a hypothesis, testing it, refining it, and perhaps redesigning an experiment after “debugging.”

Just as students should learn how to read, analyze, and write text effectively, they need exposure to computer science to become informed digital citizens who understand how technology impacts their everyday lives, said Yasmin Kafai, a professor at the University of Pennsylvania Graduate School of Education.

Kafai, co-author of Connected Code: Why Children Need to Learn Programming, said that a big part of the world nowadays is the digital public sphere, “where we interface through machines.”

“We want to provide students in K–12 with an understanding of what that actually is—it’s a designed world, and it makes a difference when you understand how it’s designed,” she added. “It helps to understand its limitations.”

Such skills might help young people feel comfortable working with large amounts of data and empower them to push back against the negative impacts of technology.

After defining what computer science means for their districts, administrators need to decide what outcomes they hope to achieve for their students, advocates say. They acknowledge that in the early years of computer-science education, they overemphasized its role in training future programmers. With a shortage of tech workers in many regions, workforce development has been a powerful argument for offering computer science.

Jobs in computer and information technology are among the best paying in the United States, with the median annual salary for these occupations clocking in at $91,250 in May 2020, more than twice the median annual pay for occupations overall. The U.S. Bureau of Labor Statistics projects that such jobs will be among the fastest growing in the next decade. Yet the vocational approach to computer science turns off some administrators, who believe that K–12 education is more than just training young people for jobs.

“We’ve been using the workforce argument a lot when we talk about expanding computer-science education,” said Leigh Ann DeLyser, co-founder and executive director of CSforAll. “Yet the national survey of school administrators from NCES shows that less than half of school administrators see workforce development in the top three priorities for student education. We were putting out a message that was completely mismatched from what administrators thought was the purpose for kids to be in school.”

CSforAll has worked with more than 146 public school districts serving about 2 million students to conduct mapping exercises that helped administrators shape their computer-science curricula to match their school’s vision for what their students should get out of the subject.

Just like states’ definitions of computer science, visions that undergird state standards vary widely. In Nevada, it’s about civic engagement. In Indiana, school reform. In North Dakota, cybersecurity.

These values and others expressed in state guidelines, such as equity, literacy, innovation, and personal fulfillment, are key to developing curricula that appeal to all students, according to a CSforAll study.

Curriculum Choices

Computer-science curriculum choices abound, with both commercial programs and free options available. While apparently no one tracks which curricula are used most often, the free introductory courses offered by Code.org are very popular and currently used by about 1.3 million teachers.

Schools have also widely adopted curricula offered by Project Lead The Way, Codelicious, and Code Monkey. Some curricula feature easy-to-use block-coding programs, such as the MIT-developed Scratch and Google’s Blockly, that allow programmers to drag and drop blocks containing instructions to create animated stories and games.

Some curricula integrate computer science into other subjects. Bootstrap aligns programming concepts in game design with algebra. Project GUTS helps students create scientific models using web-based software.

Advocates suggest it’s best to cultivate students’ interest in computer science in elementary school, citing research that the earlier children are exposed to the subject the more likely they are to want to take it in middle and high school. Teaching computer science in younger grades is still not common, however.

To broaden access to the subject for high school students, researchers developed more basic curricula. Exploring Computer Science, which includes web design, data analysis, robotics, and programming through Scratch, is used by districts in Los Angeles, Spokane, Chicago, and New York City, among others.

Another course, AP Computer Science Principles, was designed, like Exploring Computer Science, in part to interest more women and minorities in the discipline. Teachers can use a variety of curricula to teach the AP course that includes lessons on how to design and program “socially useful” mobile apps, write and talk about ideas, and collaborate with peers. In 2016–17, the course’s first year, AP Computer Science Principles attracted more students than any other AP course debut in history.

Even as more schools and teachers use such wide-ranging curricula, determining their quality is difficult, noted Allison Scott, chief executive officer at the Kapor Foundation, an Oakland nonprofit that researches diversity in technology. “I think there is still a lot we don’t know about the effectiveness of computer-science curriculum overall, due to a few key challenges,” Scott wrote via email, “including the lack of consistent assessments for computer-science courses and the lack of information on the curriculum landscape.”

A December 2020 report from the College Board found that students who took computer-science principles were three times more likely to choose the major in college than peers who didn’t take the course—16.9 percent versus 5.2 percent. Even so, the nonprofit has no information on which curriculum was used in AP classes—it endorses a range of options for teachers to choose from—and whether any resulted in better outcomes, Scott wrote.
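The "three times more likely" figure follows directly from the two percentages reported:

```python
# College Board figures: share of students choosing a CS major in college.
with_course = 16.9     # percent, among those who took CS Principles
without_course = 5.2   # percent, among those who didn't

print(round(with_course / without_course, 2))  # 3.25 -- roughly three times
```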

Researchers who study how computer-science curricula are used in elementary and middle schools found that teaching approaches range from very scripted lessons to open-ended ones where students are asked to create projects on a blank page.

“In our study, we found really big gaps in learning—for some kids you give them a blank screen and they are not going to push themselves,” said Diana Franklin, an associate professor in computer science at The University of Chicago. “The way people are teaching and the curriculum they use is not sufficient—there is room for improvement.”

Instead of randomly clicking on blocks in an open-ended approach to coding, she said, students first need to be given an example project that uses prompts to walk them through the steps of programming something on the screen and that requires them to write down their observations and predict what will happen with each step.

To track students’ progress and understanding of the material, the discipline needs written assessments that are validated, Franklin said. Such tests would allow schools to publish computer-science successes, she added. To help students score well on such assessments, schools would be incentivized to improve their curriculum, she added.

Source: Program on Education Policy and Governance, Harvard Kennedy School.

The pandemic gave people a "taste" for content creation, says YouTube

Singer Marília Mendonça during a livestream. Photo: Reproduction, YouTube/Marília Mendonça

Video content creation by ordinary people is a road with no turning back. At least, that is YouTube's view for the coming years. For Kevin Allocca, head of culture and trends at the video social network owned by Alphabet, Google's parent company, the pandemic made people shed their shyness about making their own videos, and the trend is for that to continue even after distancing measures end around the world, including in Brazil.

According to Allocca, creating video content made people feel more connected to one another during such a difficult period. "People are creating more and more videos, and this has become an increasingly important tool in their lives," Allocca says. "This experience created a whole sense of community."

In a survey conducted by the consultancy Ipsos at YouTube's request, 57% of respondents said that creating video content helped them feel more connected to other people. The data, obtained exclusively by Estadão, will be presented by Kevin Allocca as part of Cannes Lions, the International Festival of Creativity, an event for which Estadão is the official representative in Brazil.

The livestream craze

Livestreams, which were a big hit early in the pandemic but lost some steam as distancing measures dragged on, also played an important role. Even so, according to the survey, 64% of people in Brazil felt less alone when watching live shows by major artists.

Brazil, in fact, stood out with shows that broke audience records worldwide. The most notable case was sertanejo singer Marília Mendonça, who, even performing from her own living room, drew an audience of 3.3 million simultaneous viewers at the broadcast's peak.

That simplicity may in fact have worked in the singer's favor. According to YouTube's trends report, people began seeking out content they could identify with, all over the world and even in some quite exotic forms. One example: videos of mass weddings in South Korea surpassed 1 million views.

But will this kind of content stay popular even after people return to more normal lives? Allocca thinks so. "These trends are continuing, and what the pandemic did was accelerate the process," the executive says.

Not by chance, podcast videos built on informal conversation have also performed very well, according to YouTube. One example is Flow Podcast, hosted by Bruno Aiub, known as Monark, and Igor Coelho, known as 3K, which amassed fans during the pandemic. Since January 2020, Flow has gained more than 200 million views, according to the platform.

On top of that, even travel videos became more popular during the pandemic, though travel of a different kind. The channel Lucas MotoVlog, by Lucas Santana, gained followers during the pandemic with very simple, unusual content: Santana on his motorcycle, riding trails. Videos of this type have received more than 40 million views in Brazil since the start of last year.

Revenue on the rise

Videos like these, combined with people stuck at home because of the pandemic, drove YouTube's revenue far above the previous year's. Advertising revenue on Alphabet's video network grew nearly 45% last year, to US$ 6.8 billion. And those figures should keep climbing, as YouTube has increasingly become a destination for ad campaigns, sometimes even exclusive ones.

"Today, in advertising, there is no longer such a thing as making a campaign for broadcast TV, pay TV, or the internet. Everything is video. There are even campaigns that start on YouTube and only later go to television," says Edu Lorenzi, CEO of Publicis.

Now YouTube also wants to grab a share of the revenue that rivals like TikTok and Instagram are earning from short videos. In May, YouTube began rolling out Shorts, videos of up to 60 seconds gathered in a dedicated space, a good tool for channels to become better known.

"It's a type of video that is growing a lot, and YouTube is investing heavily in it," Allocca says. "We want to help people be more creative."

Source: André Jankavski, O Estado de S. Paulo, June 21, 2021.

3 Questions for Emi Makino, Author of Innovation Makers: How Campus Makerspaces Can Empower Students to Change the World

Bruce Rosenstein
Author, Editor, Speaker, Blogger

The importance of large-scale innovation has been underscored recently by the Endless Frontier Act, currently under consideration in the United States Congress.

It’s been written about recently in The Economist, and in a Washington Post opinion piece by columnist George F. Will. Smaller-scale innovation is important too, as demonstrated in the recent book Innovation Makers: How Campus Makerspaces Can Empower Students to Change the World by the Japanese author Emi Makino, an Associate Professor at Hiroshima University.

The concept of campus makerspaces is still largely under the radar, and the book should help readers get a better sense of the opportunity these spaces provide for students and faculty, especially when more people return to college campuses worldwide.

The book itself exists because of an innovative process: it grew out of a Georgetown University course Emi took (via Zoom) on content entrepreneurship, taught by adjunct professor and serial entrepreneur Eric Koester. The writing, publication, and fundraising (in this case through Indiegogo) of the book were the whole point of the course. Coincidentally, years earlier, Emi was a student at Georgetown, during one of her various stints living in the United States.

I’ve known Emi since 2009, when I met her during the Drucker School’s events marking the 100th anniversary of Peter Drucker’s birth. She was then an MBA student, and later received her Ph.D. at the school. In the book, she describes her varied, worldwide career, in which before she became an academic, she was a journalist and a translator. It was in the latter role that I initially met her: I introduced myself after witnessing her excellent translation of a talk by Masatoshi Ito, Drucker’s longtime friend, and the founder and honorary chairman of Japan’s Seven & i Holdings Co., Ltd.

Drucker followers will find Emi’s book to be particularly interesting. Although she came to campus after Peter’s death, she was mentored by his wife of 68 years, the late Doris Drucker, who is written about extensively in the book. Emi also studied with the famous Mihaly Csikszentmihalyi (best known for the concept of flow), when he was teaching at the Drucker School. “One of the things I learned in Csikszentmihalyi’s class,” Emi writes, “is that the structure of work is highly compatible with the conditions that enable the flow experience.”

Emi and I will (separately) be contributors to the Global Peter Drucker Forum’s upcoming virtual event, A Day of Drucker. She will be the Chair of the session “How Management Ideas Have Impact: The Case of Drucker in Japan.” One of her panelists, Drucker School MBA Katsutoshi Fujita, Founder and CEO of Project Initiative Co., and author of Real Management Learned at the Drucker School, is another longtime friend of mine.

In 2013, Emi accepted my invitation to write a guest passage in my second book, Create Your Future the Peter Drucker Way. In this post, she answers my questions about her career, the makerspaces phenomenon, and its implications, especially for the role of libraries in furthering innovation.

In your teaching and research, what are the commonalities/relationships you have observed between makerspaces/physically making things (whether on campuses or elsewhere) and major trends in recent years of invisible/intangible assets and intellectual property?

The internet has democratized the production and distribution of products and services that can be digitized, such as entertainment, design, and software. One no longer needs to make significant capital investments in order to gain access to the means of production.

Even kids can produce and broadcast professional-quality videos with just a smartphone. However, the democratization of manufacturing is only just emerging. It is coming, but not quite as quickly as some had anticipated. Making and distributing physical things is challenging and still quite expensive despite the promise of digital fabrication. Once the makerspace equivalent of Amazon’s AWS emerges, that’s when the digital tsunami will hit the old guard in manufacturing.

Within the book you mention a number of cities around the world in which you have lived/worked/studied/taught. Has this diverse experience had an impact on the way you write, teach, and conduct research?

Yes. Living and working in so many places has helped me to better appreciate the complexities in the world around us. It has also helped me to question what is, and to continuously strive towards a better future.

Makerspaces have become important in libraries. What role do you see in the future that libraries (public, academic, or corporate) could play to strengthen innovation and entrepreneurship, and education about these subjects?

Libraries will play a critical role in democratizing manufacturing. Books were an essential source of empowerment as we transformed into a knowledge-based society in the 20th century. But knowledge alone cannot produce physical products. Unlike bits, atoms are subject to the laws of physics. The “last mile” in the manufacturing supply chain has yet to be connected digitally to homes. That’s why libraries are so important. The physical act of making things is a powerful way to learn and practice entrepreneurship. The more people have access to equipment, the greater the chance for innovative solutions to wicked problems to emerge.

Source: Bruce Rosenstein, Author, Editor, Speaker, Blogger.

Anitta gets a seat on Nubank's board of directors

According to Nubank, Anitta will take part in quarterly meetings with the other board members and with the fintech's executive team, and will discuss strategic business decisions

Singer Anitta has gained a seat on the board of directors of Nubank, a bet by the digital bank on the artist's expertise in marketing and in building brands in the digital world. Anitta, who in recent years has expanded her career across Latin America and the United States and has tens of millions of followers on social media, will occupy one of the board's seven seats.

According to Nubank, Anitta will take part in quarterly meetings with the other board members and with the fintech's executive team, and will discuss strategic business decisions. The digital bank hopes to draw on the singer's knowledge to engage the audience that lacks access to traditional financial services but is familiar with digital tools. With 40 million customers, Nubank has grown exponentially since its founding in 2013 precisely by focusing on this market niche.

Anitta says she accepted Nubank's invitation because of the direction of the fintech's products. "It is really annoying and embarrassing not to be able to access financial products. Many people in Latin America have always lived off informal work. How are these people going to build a credit history?"

With Anitta's arrival, the bank adds to its board a name well known on social media, with a young audience and international appeal. The singer is set to release the album "Girl from Rio" in the coming months, aimed at the international market, especially the American one, though she has been successful outside Brazil since the middle of the last decade. Nubank also has its eye on the United States: it is expected to debut on the American stock market within the next 12 months.

"Anitta has deep knowledge of consumer behavior in the markets she has been exploring and great experience with winning marketing strategies," says Nubank founder and CEO David Vélez. "These skills were key to our inviting her to the board. No other board member has this experience."

Nubank's board currently includes Anita Sands (formerly of UBS), Jacqueline Reses (chair of the economic advisory council of the Fed, the American central bank), Daniel Goldberg (formerly of Morgan Stanley), Luiz Alberto Moreno (formerly of the IDB), Doug Leone (of Sequoia), and Vélez himself.

Nubank recently reached 40 million customers for products such as its no-annual-fee credit card and its fee-free digital account, and claims to be the largest independent digital bank in the world. Earlier this month, the institution received investments from American mega-investor Warren Buffett and from Brazil's Luis Stuhlberger.

The market read the investment as an endorsement of Nubank, given the two managers' investment profiles, which lean toward established companies with proven business viability rather than startups.

Source: Matheus Piovesana — O Estado de S. Paulo, 06/21/2021

Rita McGrath & Ram Charan

Your edge, your future

There’s no turning back, and no reason anyone would want to: The edge, bursting with useful data, is the future. It’s becoming the dominant source of all enterprise data. Its computing capabilities, combined with ever-advancing artificial intelligence, are growing at a fast clip. Add in the economic forces at play—the expectation of computers doing more from afar, whether that’s at a manufacturing site, a retail store, or inside your car—and we have an ever-evolving world of possibilities.

The upshot is that there doubtless will be far more aggregate computing happening at the several layers of the edge than exists in the data center today. There will also likely be as much networking, albeit in a different form. In fact, five years from now, we’ll look back at this pivotal time in enterprise computing and it will seem intuitively obvious that IT organizations had to move their focus from the data centers that dominated corporate computing for five decades.

There is a parallel of sorts. The Internet technologies commercialized during the dot-com boom in the late 1990s and early 2000s forever changed the scale of compute, storage, and networking in the data center as well as the technologies deployed to create that scale. Eventually, big data entered the picture and, combined with statistical and neural network software, allowed machine learning—envisioned three decades ago but impossible with the small datasets and puny parallel processors that existed until about a decade ago—to actually work.

With ML and AI, everything changes. Devices of all kinds gather up telemetry so they can help manage all aspects of themselves and deliver insights that compel either our action or that of another device. Put another way, all this alters the nature of the relationships that companies—and people—have with the world around them.

The edge is about scope. Instead of massive banks of compute and storage encapsulated in a data center, we have a swarm of orders of magnitude more computing elements out there, right where the real world is happening. Pushing this IT infrastructure to the edge is necessary because too much data is generated “out there,” which makes relying on data centers costly and ineffective because things wouldn’t happen fast enough.
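A back-of-the-envelope calculation makes the point. Every number below is an illustrative assumption (a hypothetical retail chain), not a figure from the article:

```python
# Why raw edge data can't all be backhauled to a data center.
# All numbers are illustrative assumptions.
cameras_per_store = 20   # video cameras in one retail store
mbps_per_stream = 4      # one compressed 1080p stream
stores = 1000            # stores in a hypothetical chain

total_gbps = cameras_per_store * mbps_per_stream * stores / 1000
print(f"{total_gbps:.0f} Gbps of raw video, continuously")  # 80 Gbps
```

Processing those streams where they are generated, and sending back only results, is the alternative the edge offers.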

The edge is about action

The edge will have precisely enough ML intelligence to turn live data streams, not large datasets, into some sort of work. The edge is about action, propelled by machine learning intelligence that was initially constructed in the data center and set free to roam the networks being stretched, quite literally, to cover the entire world. The edge devices may do their inferring and, perhaps someday soon, their own training right where the world is happening.
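The pattern described here (train centrally, then infer and act at the edge) can be sketched minimally. Everything below, including the threshold "model" and the simulated stream, is an illustrative stand-in, not a real deployment:

```python
# A model built centrally is pushed to an edge device, which infers on a
# live stream and acts locally instead of shipping raw data back.
THRESHOLD = 0.8  # stand-in for a model shipped from the data center

def infer(reading: float) -> bool:
    """Edge-side inference: flag anomalous readings."""
    return reading > THRESHOLD

def sensor_stream():
    yield from [0.2, 0.5, 0.93, 0.4]  # stand-in for live telemetry

for reading in sensor_stream():
    if infer(reading):
        print(f"act locally: anomaly at {reading}")  # e.g. stop the machine
```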

We are on the precipice of yet another technological era. Today, of course, companies are not starting from scratch as they did during the initial computerization of the back office; there is no time for that. The people running companies need actionable insight, as the common phrase goes.

Everything outside of the data center—billions and perhaps someday trillions of devices—is being equipped with monitors of all kinds and networked to AI-enhanced compute. Data is generated in massive streams, at speeds entirely too fast for human beings to process.

But human capabilities aren’t the point; the machines can run the show. These edge systems are about taking action, right there, right now. And that requires a new architecture, a new way of thinking, and expertise.

Intelligently building your intelligent edge

It’s relatively easy to drop what amounts to a baby data center in a remote location. But once more than a dozen applications are running in a remote location, and there are many remote locations on top of that, IT organizations have to formalize the people, processes, and technologies involved.

Take the retail industry, which has had remote processing for running transactions and managing inventory for decades. Now, retailers are installing video cameras in their facilities to do pattern recognition and look for fraud within store aisles or at the self-checkout stand. The systems to do that are special servers driven by AI and supported by GPU processing. There are about a half dozen different pattern recognition applications, plus other sensing applications for asset management (including inventory and people), plus the point-of-sale applications already in place.

And all of the processes employed in the management of centralized IT—software release management, remote configuration and monitoring of hardware, change ticketing for tech support, remediation, and on and on—have to be applied to the edge.

That’s why the edge is a more diffuse and potentially larger-scale problem than the data center. The issues are more complex. And there will be many layers of edge on top of that, with infrastructure out there in the real world at the literal interface between people and things, but also points of presence and other kinds of aggregation edges that are not really part of the data center at all. All of this can catch IT organizations off guard if they just dive in.

The cultural shift will be just as jarring. Building data center infrastructure as we all know it—dense servers clustered to deliver scalable compute and storage over closed networks spanning a data center plus some legacy monolithic systems—will become an esoteric art as the focus of IT shifts from the data center to the edge.

The problem that IT organizations face will shift from “How do you stand up a room full of infrastructure to run 2,500 different applications?” to “How do you automate all kinds of interactions with people and machines and facilitate other kinds of infrastructure and deliver an experience or manage a process, end to end?”

Here is the other tricky bit. All edges are relative. Running some aspects of infrastructure in an intermediate place like a public cloud—perhaps in an aggregation layer with lots of analytics—makes it an aggregation edge relative to the further edge out there in the world. With the public clouds having hundreds of regions and then additional points of presence feeding into them, it is reasonable to expect that some of this capacity will be deployed as aggregation edges for expediency and data sovereignty reasons.

However, latency requirements for processing data in real time or near real time mean that compute should reside at the furthest edge. Also, there is not enough bandwidth in the world—not even in future 6G networks—to physically move all data back up to a centralized data center for processing. This demands that storage be local and that some data be deleted after a period of time.

Back to the video surveillance in the self-checkout line at a retailer. If someone scans the wrong barcode on an item, or forgets to scan one, the retailer wants to know immediately and send over an employee monitoring the queues to fix the issue. It has to happen in one second, maybe two. Sending a video stream, even at 10 frames per second, up to a public cloud region will take tens of seconds, maybe longer; even if processing there is fast given the scale of the infrastructure, the shopper will be driving home before the retailer figures it out.
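The back-of-the-envelope arithmetic behind those "tens of seconds" is easy to sketch. The frame size, frame rate, clip length, and uplink speed below are illustrative assumptions, not measurements from any real deployment:

```python
# Back-of-the-envelope: how long does it take to ship a short video
# clip to the cloud? All figures are illustrative assumptions.

RAW_FRAME_BYTES = 1920 * 1080 * 3   # one uncompressed 1080p RGB frame
FPS = 10                            # frames per second in the example
CLIP_SECONDS = 2                    # window needed to spot a mis-scan
UPLINK_BYTES_PER_S = 20e6 / 8       # an assumed 20 Mbit/s store uplink

clip_bytes = RAW_FRAME_BYTES * FPS * CLIP_SECONDS
upload_seconds = clip_bytes / UPLINK_BYTES_PER_S

print(f"clip size: {clip_bytes / 1e6:.0f} MB")        # ~124 MB
print(f"upload time: {upload_seconds:.0f} s")         # ~50 s
```

Even aggressive compression only shaves that by one or two orders of magnitude, which still leaves the round trip far outside the one-to-two-second budget; inferring on the frames locally avoids the trip entirely.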

A certain amount of compute, storage, and networking needs to be immersed in the physical environment because processing within that environment, instantly, is the whole value proposition of the edge. What is true of video surveillance in a self-checkout line at a retailer is equally true of a motherboard assembly line that has surveillance monitoring the quality of solder joints or a car manufacturer looking at its shop floor. Mistakes are costly everywhere in business, and catching them early, and often, means not losing time or money—or both.

Here is another subtle point to consider: Edges never exist by themselves. There are myriad applications that will use sensors and AI at the edge to control that physical environment in some way. There will be many different sensors and platforms that vendors will try to vertically integrate, but these will by necessity exist side by side in physical locations like stores, warehouses, factories, airports, and train stations. The edge is really a network of edges, and they will have to be managed in the aggregate in some fashion. Not every edge device can have its own way of doing this; that would be inefficient.

It remains to be seen who will do the installation and the management of these edge devices. It could be the companies using these devices. Or it could be the vendors that supply them and the companies merely provide a “data room” with a controlled environment where all of this stuff plugs in and the vendor manages it remotely.

System integrators will no doubt emerge to weave together different edges for specific situations and offer to manage the whole shebang. And customers will want as much of this infrastructure to look and feel like it is on-premises data center gear and yet be offered in a cloud-like subscription model. In the coming years, edge compute should comprise as much as 20 percent of server installations (using a much looser definition of what constitutes a server than is common today), and over time, it could grow to be two to three times the aggregate compute capacity across the data centers of the world.

Pushed to the edge

Here are five things companies looking at deploying edge computing need to think about as they architect their platforms:

  • Connectivity: There is no such thing as pure hardware anymore. Everything is connected, and at the edge, everything starts with the sensors in touch with the physical world, no matter what they are, and works its way backward from there. And things have to work locally even when the connectivity up the chain of data and processing doesn’t.
  • Autonomy: Most factories today already have some form of automated guided vehicles—what some might call robot forklifts—but AGVs are not limited to that form factor. They used to need network connectivity to function, but with machine learning techniques, they can continue operating safely without it. Or take mining operations. Sensors on motors in drilling equipment are used to show that a drill bit is about to break or has broken. The same telemetry could be even more useful: monitored and run through AI algorithms, it can warn miners that they are hitting a particularly hard kind of rock before the bit breaks, saving the bit, the motor, and the downtime associated with a broken bit.
  • Latency and high-volume data: The ability to make split-second decisions is necessary out there on the edge. Like a lunar or Mars rover, an edge system has to make decisions locally, in real time, because network latency can stretch a round trip into minutes. Similarly, if it takes high volumes of data to make a decision, and that data can’t be moved efficiently or cost-effectively, that also forces decisions to be made and actions to be taken at the edge.
  • Complex event processing: In the drill bit example above, what is really at work is complex event processing across multiple sensors, all in real time. There are vibration sensors in the drill as well as gauges that look at the energy input to and current draw by the motors. Or in augmented agriculture, farmers will take the historical data from the national weather centers and mash it up with sensors at the edge to get real-time conditions in the actual fields and figure out when to sow specific plants. The more sensors, the better the assessment of the situation and the better the action that is taken out on the edge.
  • Cloud avoidance for data: No one is avoiding the cloud because its storage is bad. Rather, there is just going to be far too much telemetry to keep, and it is going to be far too expensive to store and even more expensive to use—even at the relatively cheap, and declining, prices of cloud storage. The data volumes at the edge will grow faster than the price of cloud storage comes down, so storing edge data and processing it in the cloud is a losing proposition from the get-go. Autonomous vehicles are a great example. If you are training models for self-driving cars, you need to gather lots of information from real-world conditions—typically dozens of cameras and hours of driving, which adds up to petabytes of data. Even if you could upload that data to the cloud for storage, moving it to an AI cluster to run training routines would cost a fortune. The easiest thing to do is to put a replaceable storage array in the car, drive it to the data center, and swap it out.
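The drill-bit scenario in the autonomy and complex-event-processing bullets above can be sketched as a toy sensor-fusion loop. Everything here—the window size, the thresholds, and the notion that hard rock shows up as high vibration combined with high current draw—is an illustrative assumption, not mining-domain fact:

```python
from collections import deque
from statistics import mean

# Toy complex-event processor: fuse two sensor streams (vibration,
# motor current) over a sliding window and raise a warning before the
# bit breaks. All thresholds are illustrative assumptions.

WINDOW = 5              # samples per sliding window
VIBRATION_LIMIT = 7.0   # window-average vibration suggesting hard rock
CURRENT_LIMIT = 40.0    # window-average current draw (amps) under strain

def watch(samples):
    """Yield the index of each sample where both window averages exceed limits."""
    vib, cur = deque(maxlen=WINDOW), deque(maxlen=WINDOW)
    for i, (vibration, current) in enumerate(samples):
        vib.append(vibration)
        cur.append(current)
        if len(vib) == WINDOW and mean(vib) > VIBRATION_LIMIT and mean(cur) > CURRENT_LIMIT:
            yield i

# Simulated telemetry: normal drilling, then the bit hits harder rock.
telemetry = [(3.0, 30.0)] * 10 + [(9.0, 45.0)] * 10
alerts = list(watch(telemetry))
print("first warning at sample", alerts[0])   # prints: first warning at sample 13
```

The point of the sketch is the fusion: neither stream alone is conclusive, but a few samples after conditions change, the combined window averages cross both limits and the operator is warned while the bit is still intact.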

The edge is competitive, and companies need a competitive edge. That means building an edge strategy that is focused on achieving very specific results, not installing technology for its own sake. As a starting point, it means doing projects that take data and telemetry from many sources in the physical world—and do so in a secure fashion—and then applying machine learning models to them to take automated action in the physical world.

Source: Lin Nease, Chief Technologist, IoT, Hewlett Packard Enterprise