AI: Using your power for good
Artificial intelligence offers abundant opportunities for customer care, automation, prediction and sustainable resource planning, yet the fear factor persists. Here’s how managers can make the most of AI, ensuring humans and ethics shape its use.
A patient is checked for signs of cancer. The clinician studies a CT scan for telltale indicators. Yet, while humans can see as many as 10 million different colors, they can only detect around 30 shades of gray. So even the best radiologist with the best eyes might look at that scan and see a normal image — but an AI algorithm looking at the same image can detect patterns invisible to the human eye. Maybe the AI detects cancer early, when it’s small, hasn’t spread and is easier to treat. And by mapping this patient’s exact type and stage of cancer against a large dataset of similar patients, the doctor can then decide which treatment options have the best odds of success. The AI can go further still: the patient might not have cancer, but the AI, crunching genetic makeup, health history, family medical history and lifestyle choices, can predict its future occurrence. Clinicians can then intervene before any disease has developed, with a health plan personalized to that patient.
This is precision medicine, an exciting new field that would not be possible without AI. In the words of Dr. Mathias Goyen, Chief Medical Officer EMEA at GE HealthCare, “It’s not science fiction, it’s science fact.”
Goyen was one of several speakers who took to the stage of the Science Congress Center Munich on October 7, 2022, to discuss “AI: Using your power for good,” the theme of IESE’s Global Alumni Reunion (GAR). More than 2,000 online attendees joined some 800 people who gathered in person to hear AI experts and business leaders from Germany’s top companies explore the impact of AI on individuals, organizations, markets and geopolitics.
And while, for many people, the possibilities of AI may still seem like science fiction, Goyen urged attendees to up their “technology quotient (TQ),” the same as they would their IQ or their emotional intelligence (EQ). “As with everything in life, you can be an early adopter, embracing new technology, or you can wait and see, watching how others are doing it. In the end, you will also have to do it, but then you’ll be a follower, not an influencer. I’d like to be in the driver’s seat, actually shaping this field.”
This report shares the expertise of GAR speakers, so that you and your business not only see the future but help shape it — for good.
AI in context
Artificial intelligence: We’ve been talking about it since the 1950s, but it’s only over the last 10 years or so that AI has reached the stage where applications like Goyen describes are even possible.
Computer scientist Nuria Oliver gave a quick history. The first generation of AI involved inputting lots of procedural code by hand, training computer systems to make decisions by following sets of if-then rules and generalizable examples. This enabled companies to automate a lot of straightforward business processes.
Then, in the late ’90s/early 2000s, a new generation of AI began to emerge. Oliver credited three turning points: (1) the plethora of big data being generated through connected devices, sensors and online activities; (2) computational power that has gotten faster, smaller, cheaper and more efficient over time, in line with Moore’s Law of exponential growth; and (3) the emergence of data-driven machine-learning models, where the AI, trained on enough data, is now able to make accurate inferences with less supervision (or even self-supervision). This AI “on steroids,” as Oliver quipped, is “at the core of the revolution that we are experiencing when we talk about AI today.”
This AI “on steroids” is “at the core of the revolution that we are experiencing when we talk about AI today”
Yet there’s another big factor that has revolutionized AI — globalization and geopolitics, which have moved center-stage since the COVID-19 pandemic and the war in Ukraine. As IESE Prof. Llewellyn D.W. Thomas noted: “AI is not just a technology. AI has the ability to be profoundly disruptive across economies.”
Anyone who has tried to buy a car since 2020 can attest to that. The global shutdown wreaked havoc with supply chains and consumer demand, leading to a shortage of semiconductor chips on which everything in our wired world depends — from cars and trucks, to smartphones, TVs and countless other electronic devices. “Effectively, without semiconductors, there is no AI,” said Thomas.
Moreover, 75% of the global production of semiconductor chips comes from East Asia, with one single company — Taiwan Semiconductor Manufacturing Company (TSMC) — accounting for the lion’s share of that. Plus, the semiconductor manufacturing process requires neon gas, 60% of which is supplied by Ukraine.
Thus, we are seeing not just businesses but governments realigning their AI strategies with a renewed sense of urgency. The U.S., for example, passed the Chips and Science Act of 2022 to bring semiconductor R&D and production back home, while also prohibiting exports of semiconductor chips and related technologies to China. The European Union has its own Chips Act in the works. China, meanwhile, controls many of the rare minerals on which chip manufacturing depends, and it has upped military exercises around Taiwan. This so-called AI Chip Race has repercussions, not only for China, the United States and Europe, but for all countries deciding how they will position themselves vis-à-vis the global superpowers.
Maria Marced, President of TSMC Europe, said she could understand the pendulum-swing back to more localization. “We are too easy on supply,” she admitted, recalling how some tech manufacturers were overly reliant on supplies of nickel from Ukraine for their tooling. When the war broke out, they couldn’t deliver for lack of a critical yet relatively abundant raw material needed for components as basic as cables, and it would take months to find new suppliers.
Having said that, she also recognized an equally undeniable reality: “No single region or country has every layer integrated within itself.” Going forward, she advocated for a hybrid model that combined “the beauty of globalization with the merits of localization.” Because there is no getting around the fact that, when it comes to AI, “geopolitical collaboration is going to be crucial.”
Focusing on the good AI can do
This is the macro context the business world finds itself in as it grapples with trying to use its AI power for good. And as Inma Martinez reminded us, “AI for good” is the point. (See her interview: “We need AI that takes human wellbeing into account.”)
This point can sometimes get lost amid zero-sum geopolitical games and the dystopian portrayals of AI that pervade our pop culture. “AI is not the Terminator from the movies,” stressed Florian Deter, managing director of Microsoft Germany.
“AI is not the Terminator from the movies”
Dario Gil, senior vice president and director of research at IBM, elaborated: “I worry we’re pushing the fear associated with AI in ways that may inhibit its diffusion. In the medical profession, the possibility to provide a better diagnosis (such as that described by Goyen) is unquestionable. Yet there’s a danger of adopting a stance focused more on regulations than an opportunity framework.”
And the opportunities do abound. The neural networks of today’s AI mimic those of humans, giving businesses “tremendous computational horsepower,” said Gil. And because much of the labeling and cleaning of massive datasets has been done, companies rarely have to start from raw data but can use foundation models — large pre-trained algorithms — and then fine-tune the AI according to their own enterprise needs. This means “your productivity can dramatically increase.”
Here are some specific ways companies can incorporate AI into their processes without overhauling their entire business model, as suggested by the GAR speakers.
Customer care. The use of chatbots and digital assistants to interact with customers and execute back-office tasks automatically is so established that now it’s only a question of scale and diffusion, said Gil. As such interactions grow increasingly sophisticated and personalized, we are going to see broader adoption, for everything from fast-food drive-thrus to lifesaving telemedicine for hard-to-reach patients — something that came into its own during the pandemic.
Juergen Mueller, chief technology officer and executive board member of SAP, cited a major pharma company in Asia that could not keep pace with the online orders being placed during the height of the pandemic. “They had 20 to 30 people just trying to sort through orders,” he said. In collaboration with SAP, the company deployed a simple machine-learning model to read text messages, identify which products were probably meant, and then, if the confidence level was above a certain threshold, the order was automatically submitted. “By doing that, they got rid of 95% of the human effort and got much, much faster in processing orders and bringing sanitizers and other pharma supplies to their customers at a time of great need.”
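To make that pattern concrete, the sketch below shows threshold-gated automation in miniature. It is not SAP’s actual implementation: a toy text classifier maps order messages to products and auto-submits only when its confidence clears an assumed cutoff. The training messages, product names and the 0.60 threshold are all invented for illustration.

```python
# Minimal sketch of threshold-gated order automation (illustrative, not SAP's system).
# A tiny text classifier maps free-text order messages to products; an order is
# auto-submitted only when the predicted confidence clears a threshold.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented historical orders standing in for training data.
past_messages = [
    "need 200 bottles hand sanitizer asap",
    "please send 50 boxes surgical masks",
    "order 20 packs paracetamol 500mg",
    "resend sanitizer gel 1L x 100",
    "masks ffp2, 300 units",
    "paracetamol tablets, 10 packs",
]
past_products = ["sanitizer", "masks", "paracetamol",
                 "sanitizer", "masks", "paracetamol"]

# C=10 keeps the toy model reasonably confident on clear matches.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(C=10, max_iter=1000))
model.fit(past_messages, past_products)

CONFIDENCE_THRESHOLD = 0.60  # assumed cutoff; tune against the real cost of errors

def route_order(message: str) -> str:
    """Auto-submit confident predictions; queue the rest for a human."""
    probs = model.predict_proba([message])[0]
    best = probs.argmax()
    product, confidence = model.classes_[best], probs[best]
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"AUTO-SUBMIT: {product} (confidence {confidence:.2f})"
    return f"MANUAL REVIEW: best guess {product} (confidence {confidence:.2f})"

print(route_order("urgent: 500 bottles of sanitizer"))
print(route_order("some of the usual stuff, thanks"))
```

The threshold is the managerial lever: raise it and more orders fall back to humans; lower it and more flow through automatically, at the cost of more errors.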
Any company could do likewise, thanks to the wealth of off-the-shelf AI models openly available today that weren’t even around two or three years ago. “You don’t need a computer science degree to use AI applications,” said Mueller. All it takes is “a little bit more complex formulas in Excel” and a keen interest, and anyone can embed or even develop their own AI — for visual inspection, text scanning, natural language processing, to name a few — into their own business processes and achieve results with little or no coding involved at all.
Any company could do likewise, thanks to the wealth of off-the-shelf AI models openly available today
Automation. Philippe Sahli, co-founder and CEO of the startup Yokoy, can attest to the ease of leveraging existing AI. (See his interview: “You don’t need a team of 50 people to get started with AI.”) By deploying AI tools at certain key moments of the value chain, Sahli was able to automate the accounts-payable process. Moreover, as customers use the tool, the AI learns from the data, making the platform increasingly accurate and efficient for all. This becomes a source of competitive advantage.
Mueller endorsed this approach: “The more data you gather about your customers or end-users, the more you can direct an AI intervention toward some aspect of what you do or produce to optimize it for them — through automation, for example — or even come up with a completely new product.”
Predicting and anticipating needs. “AI lets you do more with less,” said Deter, explaining how Microsoft uses AI for its own internal processes, like sales forecasting, to empower its salespeople. It also powers products for its clients, such as a healthcare tool that predicts patient no-shows to help make healthcare delivery more reliable and efficient.
The latter is just one of many cloud-based AI solutions Microsoft offers through its Azure platform, which anyone can use as building blocks for their own enterprise needs, whether in healthcare, retail, financial services, public administration or manufacturing. Similarly, SAP offers 130 AI models embedded in all kinds of products for its business clients. “Not every company needs to have 100 data scientists nor be at the forefront of AI research to differentiate yourself using AI. You can tap into partners,” said Mueller.
“It’s our commitment to make AI easy and accessible,” said Deter, adding that some of Microsoft’s suite of AI tools are free and simple to use. Also, echoing Mueller, he emphasized that you don’t need any coding skills. “Start small, don’t be afraid, see what the power is. There’s a natural language processing API available. Then, all the ideas for building a scalable business — such as taking over a key function of an enterprise and reducing costs and handling times, like Philippe Sahli has done — are just a matter of availing yourself of the power that’s out there. It’s amazing what you can do.”
Speaking of natural language processing APIs, Gil foresaw the way we write and debug code being “fundamentally transformed,” with AI being able to translate a Java app to Python, for example, using the same techniques of autocomplete or predictive text that we currently employ for English or Spanish. “Code is just another language — the lingua franca of business. Chemistry is also a language and we’ve demonstrated you can actually make chemical predictions in industrial processes using AI the way we do in the field of natural language.”
Better resource planning. Judith Gerlach is Bavaria’s State Minister of Digital Affairs, the first such minister in Germany. At 37, she is the youngest member of the cabinet. In her role, she champions the use of tech that serves citizens’ interests. She explained how the ministry had invested millions of euros through its AI transfer program to support SMEs, citing two “fascinating” examples. One local agricultural company uses AI for smart farming, to reduce the use of pesticides and fertilizers. (Inma Martinez also highlighted this as an example of AI for good.) Another, a waste disposal company, uses sensors to measure the filling levels of all its waste containers. The AI is able to evaluate the data and ensure the optimal planning of the emptying runs, saving fuel, energy and time.
AI is able to ensure the optimal planning of the emptying runs, saving fuel, energy and time
“These two examples show an aspect that is particularly important for me: sustainability,” said Gerlach. “It is my firm belief that we can protect the environment by accelerating the use of digital technologies and, at the same time, become more inventive and more competitive: Tech for good at its best.”
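The logic behind a fill-level-driven collection system can be surprisingly simple. The sketch below is purely illustrative (the container readings, the 75% fill threshold and the depot location are invented, and real route planners use far better optimization than this greedy heuristic): only containers above a fill threshold are scheduled, and the emptying run visits them in nearest-neighbor order.

```python
# Illustrative sketch of sensor-driven emptying runs (not the Bavarian company's system).
# Containers above a fill threshold are selected, then visited in a simple
# nearest-neighbor order starting from the depot to shorten the route.
import math

containers = [  # (name, x_km, y_km, fill_level 0..1) -- invented sensor readings
    ("A", 1.0, 2.0, 0.92),
    ("B", 4.0, 1.0, 0.35),
    ("C", 3.0, 5.0, 0.81),
    ("D", 6.0, 4.0, 0.77),
    ("E", 2.0, 6.0, 0.20),
]
FILL_THRESHOLD = 0.75   # assumed: skip containers that are less than 75% full
depot = (0.0, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def plan_run(containers, depot, threshold):
    """Greedy nearest-neighbor tour over containers that need emptying."""
    due = [(name, (x, y)) for name, x, y, fill in containers if fill >= threshold]
    route, pos = [], depot
    while due:
        name, loc = min(due, key=lambda c: dist(pos, c[1]))
        route.append(name)
        pos = loc
        due.remove((name, loc))
    return route

print("Emptying run:", plan_run(containers, depot, FILL_THRESHOLD))
# Only the nearly full containers are visited; the rest wait for a later run.
```

Fewer stops and shorter routes are where the fuel, energy and time savings come from.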
Oliver Blume, chairman and CEO of Volkswagen Group and Porsche, picked up on this aspect of AI for sustainable management of resources. The most obvious uses of AI for Volkswagen and Porsche are in autonomous driving and autonomous charging for its ever-expanding range of e-hybrid vehicles, as well as in predictive AI for anticipating maintenance, sales and customer needs when preordering stock. Yet he revealed one more: using AI in company canteens to monitor consumption patterns and thereby prevent food and water waste. “In so many ways, we see big potential for AI.”
Mueller shared another example of a consumer goods company that uses AI to minimize palm oil in its products. In cases where palm oil can’t be entirely eliminated from the supply chain, the company wants to ensure that the farmers it works with are at least harvesting palm oil in a sustainable way. With SAP, they deploy blockchain technology to reliably track and trace how much palm oil is being harvested. They also use satellite imagery to regularly monitor those palm plantations, and mathematical modeling to calculate how much palm oil can possibly be harvested and still be sustainable. “We can use data and AI to transform our companies to be more sustainable,” Mueller said.
Here’s where you come in
As these examples show, “AI is fantastic! It can really solve many problems that we are facing on a daily basis,” said TSMC’s Marced. “The real problem I see is the lack of understanding by business leaders on how to apply AI to boost the competitiveness and productivity of their companies.”
Speaker after speaker could point to how AI is being used in manufacturing, for example, with robots doing the tooling that humans can’t. “Without AI, it would be impossible to do incredibly precise operations, like moving 2 or 3 nanometers. Business leaders get that,” said Marced.
“Yet,” she continued, “they fail to see how the same AI that makes the impossible possible in manufacturing could be used for other things like sales or regional coordination strategies, or improving productivity and innovation in tourism. They need to be more open.”
Where AI goes next is ultimately down to its human users and the limits or expansiveness of their imaginations.
Where AI goes next is ultimately down to its human users
As an article in The Goods by U.S. media company Vox noted, there are two competing visions of AI: “In the utopian vision, technology emancipates human labor from repetitive, mundane tasks, freeing us to be more productive and take on more fulfilling work. In the dystopian vision, robots come for everyone’s jobs, put millions and millions of people out of work, and throw the economy into chaos. … We often talk about technology and innovation with a language of inevitability … But that’s not really the case — there’s plenty of human agency in the technological innovation story … In the end, technology is a human creation. It’s a product of social priorities … The problem isn’t really the robot, it’s what your boss wants the robot to do.”
Or as IESE Dean Franz Heukamp put it succinctly: “There is power, and there is good, and they need to come together.” Which goes to the heart of IESE’s mission to raise up more ethically minded business leaders. And as these leaders “explore the great potential AI has for products, processes and services to improve people’s lives,” they can also “advocate for the responsible use of AI, which preserves the dignity and freedom of all human beings.”
The human element is the final yet most important piece of the AI puzzle. As important as having AI (the tech) is having AI skills — a basic proficiency in algorithms, programming, data, networks and hardware, said Oliver — combined with key human abilities like creativity and curiosity — “because you not only have to understand the new technology, you have to be creative enough to challenge it,” underscored Mueller. (See the infographic: “Train your brain for AI.”)
Here is a path to take, as recommended by the GAR speakers:
Start with data. But not all data. “We used to talk about ‘data lakes’ but for most companies their ‘data lake’ has turned into a ‘data swamp’ and no one wants to go near it because they don’t understand it and can’t do anything with it anymore,” said Mueller. “I encourage companies to work with their structured data and meaningful use-cases, and then add some customer data and you will have a very rich set of data to work with.”
Connect the data. It’s not so much about collecting data as connecting data. And not all data needs to be collected into one place. The data can be distributed across different collaborators, partners and platforms, and a federated machine-learning tool can work on top of that.
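As a minimal sketch of that idea (not a reference to any particular federated-learning product), each partner can fit a model on its own private data and share only the resulting parameters, which a coordinator then averages into a global model:

```python
# Toy federated averaging: partners keep their raw data; only model weights travel.
# (Illustrative sketch; real federated-learning tools add secure aggregation,
# differential privacy and many training rounds.)
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # the shared relationship, invented for the demo

def local_dataset(n):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_fit(X, y):
    """Each partner fits a least-squares model on its own data only."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three partners with private datasets of different sizes.
partners = [local_dataset(n) for n in (50, 120, 80)]
local_weights = [local_fit(X, y) for X, y in partners]

# The coordinator sees only the weights, never the raw records,
# and averages them weighted by each partner's sample count.
sizes = [len(y) for _, y in partners]
global_w = np.average(local_weights, axis=0, weights=sizes)

print("Locally fitted weights:", [w.round(2) for w in local_weights])
print("Federated (averaged) model:", global_w.round(2))
```

The principle is that the insight travels while the raw data stays put.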
You need people with a high-level, strategic view to understand how AI could make you more competitive
Identify a process to optimize with AI. Here is where “open” managers, educated in AI thinking, bear the most responsibility, because if you want to restructure some business process — especially the most critical steps, like shipping or pricing — then you need people who have a high-level, strategic view to understand how doing so may affect your business model and the competition, or indeed, how changing your business model with AI could help make you more competitive.
“I would pick just one or two things that are the most important part of your value chain,” recommended Mueller. “Then talk to your teams responsible for those processes. And talk to partners — there’s a lot of external help and support out there.”
Pay attention to culture. IESE professors Javier Zamora and Mireia Giné, academic directors of the GAR, both reiterated this point in their remarks. “As an engineer, I understand the love of technology,” said Zamora. “But don’t fall into the trap of starting with the tech and going in search of a problem. First, identify the problem to solve; then, decide which AI solution could be applied to it. Also remember, it’s not just the tech but also the culture and organizational capabilities that matter, in order to have meaningful conversations around the tech and what it can do for you.”
Stay people-centric. Giné commented that AI can augment or compensate for human limitations, including cognitive biases that may cloud our judgment. Yet this strength is also AI’s weakness: because the AI is trained by people, some of those biases end up programmed into the machine. That’s why you need to be skilled and literate enough to question what the machine is telling you and probe its underlying assumptions. The GAR speakers insisted on consent, transparency, explainability and auditability of the data. “For all AI’s virtues, we need to be aware of the challenges and always keep the human involvement,” said Giné.
Develop an ethical compass. Above all, you need to build ethical AI frameworks, which some speakers put on the same level as human rights. Your company may need to establish an AI ethics board, which, among other things, can monitor strategic-level concerns that managers will need to watch as industrial policies and geopolitics evolve.
Zamora cited the paper “The ethical AI paradox: why better technology needs more and not less human responsibility,” co-authored by David De Cremer, of the National University of Singapore, and Garry Kasparov, the former chess grandmaster turned human rights activist. As the world focuses on digital upskilling to deal with ever more sophisticated AI, “we will need to invest more in human upskilling, and especially so in the field of ethics.” No amount of intelligent AI can substitute for human decision-makers “trained even more than ever to think through the ethical implications of decisions and be more aware of the ethical dilemmas out there,” the paper stated.
“With great power comes great responsibility,” summed up Zamora with a smile, referencing the famous Spider-Man catchphrase as he urged all managers and business leaders to go “write the future.”
“If we want AI for good, then it depends on us to make it so.”
The Global Alumni Reunion sessions are available as e-conferences via the IESE Alumni app. To watch them, simply download the app, click on Keep Learning/Lifelong Learning, select e-Conferences, then Past, and you will find all the content there.
The IESE Global Alumni Reunion in Munich gratefully acknowledges the support of its Silver sponsor Steelcase and its collaborating sponsors CaixaBank and Banco de Sabadell.
SAVE THE DATE: The next Global Alumni Reunion will take place in Barcelona on November 16-18, 2023.
Where does your company rate on the data-driven index? Find out how ready you are for data-driven transformation, in just 20 minutes.
IESE professors Javier Zamora, Josep Valor and Joan E. Ricart have prepared an online questionnaire for you to self-assess your company’s data-driven status.
All you have to do is go to www.iese.edu/data-driven-index and answer a series of questions about your company’s business model, data model and organizational model, using a scale of 1 to 5. This should take you around 20 minutes to complete. You can choose to do it in English or Spanish.
Once you submit your information, in a few days’ time you will receive a PDF comparing your company’s results with the global average of all those who have also completed the questionnaire.
In this way, you can compare your company to others in your sector and geographical area in terms of your technological, business and organizational readiness to become a data-driven company. This can help you know where best to target your resources for digital transformation.
In addition to generating a useful action agenda for your company, your answers will help IESE to develop further research on these topics, specifically the technological, business and organizational dimensions that positively contribute to digital transformation.
Why not take a few minutes to assess yourself?
READ: “Data, a critical asset for 94% of managers” at www.iese.edu/insight where you can also download the 2022 Data-Driven Index report.
READ MORE
The case study, “Learning the machine: Anovo Ibérica introduces AI in operations” (SI-207-E) by J. Zamora & J. Valor, walks through the opportunities and challenges facing a medium-sized enterprise trying to improve the efficiency of its operations using AI. This case won a 2022 Research Excellence Award by the IESE Alumni Association in its annual recognition of the best research by IESE faculty members.
Zamora has also developed a simulation for business managers, perfect for graduate, post-graduate and executive education programs. The case centers on a big bank called Millennials Bank that is experimenting with artificial intelligence. Both are available in English and Spanish from IESE Publishing.
“A guide to using AI in HR for efficiency and effectiveness” suggests how to use new AI technologies in people management, as found in the book Liderar personas con inteligencia artificial by J.R. Pin & G. Stein (McGraw Hill, 2020).
WATCH THIS SPACE: IESE professors Sampsa Samila and Marta Elvira, with José Azár, Mireia Giné and Jeroen Neckebrouck, are undertaking ongoing research to understand how the adoption of AI and digital technologies may affect economic competitiveness and labor markets, especially after COVID-19. The aim is to develop actionable insights that managers and policymakers can use for steering companies and the economy through challenging times. The research project, “Intelligent management: the impact of AI adoption on firm performance and the future of work,” is being funded by the Spanish State Research Agency until 2024. Ref. PID2020-118807RB-I00/AEI/10.13039/501100011033.
Train your brain for AI
Education is key. Learn more about AI and be cognizant of its virtues, as well as its limitations, so that the overall impact will be net positive for all.
Computational thinking
Not to be confused with computer skills. Being glued to your phone is not the same as knowing how it works inside-out and how it can be leveraged as a powerful tool for change. Computational thinking involves developing five core competencies that every person in the 21st century should have.
1. Algorithmic thinking Learning how to use machines to solve problems.
2. Programming Learning the language of AI, in the same way that we learn English as a business or science language.
3. Data Beyond possessing it, understanding its implications.
4. Networks Appreciating network architectures and how they interconnect and interact to make better decisions and generate feedback loops.
5. Hardware Knowing about the physical substrate on which AI depends.
Key human abilities
Just as important as computational thinking is the need to develop and nurture key human abilities: to organize ourselves, work together, collaborate, cooperate, coexist and agree to disagree. These abilities distinguish us from other species, and we risk losing them the more we let tech mediate our interactions.
1. Critical thinking
2. Creativity & curiosity
3. Social intelligence
4. Emotional intelligence (EQ)
5. Technology quotient (TQ) Openness, whether young or old, to embrace new technology — always with a critical eye.
Ethical vision
Carefully think through the ethics of AI and decide on the guiding principles, especially with regard to people. Setting up a dedicated AI ethics board can help, especially for keeping your business decisions and choice of partners aligned with your principles and purpose. AI ethics can build reputational and competitive advantages.
Transversal thinking
Because AI is going to have a 360-degree impact, we need 360-degree thinkers able to span diverse disciplines — so, not just more programmers, technologists and data scientists steeped in STEM subjects, but more philosophers, anthropologists, behavioral psychologists, ethicists and various other skill sets able to see the big picture and make vital links between disparate fields.
SOURCE: Based on comments by Nuria Oliver, Dr. Mathias Goyen and Juergen Mueller delivered during IESE’s Global Alumni Reunion in Munich on October 7, 2022.
Interview with Philippe Sahli
Philippe Sahli is the co-founder and CEO of Yokoy. “People love hearing stories of entrepreneurs who have a dream, find a problem to solve, and launch a business. That’s exactly what happened to me, and it’s all based on AI.”
Philippe Sahli was working as a CFO when he had that eureka moment. He had personally experienced the frustrations of billing, collecting and submitting receipts that were then manually checked before accounts could be paid. He felt that the processes concerning expenses and invoices could and should be automated. AI could take care of the routine billing and flag any surprises for a human to check. In the 21st century, it didn’t seem the best use of anyone’s time to be manually checking a $3 coffee receipt.
Together with four like-minded co-founders, he launched Yokoy in 2019. Since then, the Zurich-based software-as-a-service (SaaS) company has gone from zero to 250 employees, serves over 500 clients in 60 countries, and recently raised $80 million in second-round venture financing. Here, he explains how AI made his company possible, and what every business can learn from his experience.
AI can mean almost anything. What is AI for you?
AI isn’t a monolithic force doing everything. If you feed huge inputs of data into AI, it’s going to struggle with decision-making just as a human would, because it has been trained to make decisions like a human. But if you break a problem down into many small steps, with good data and a robust AI model, you will get more concise results. That’s what we did. We assigned the AI to do one task: deciding what kind of document it was — an invoice or a receipt? We have many AIs, which we use to solve individual things, small problems, in incremental steps, at every stage of the entire value chain, right down to the final result.
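To make Sahli’s point concrete, here is a generic sketch of what one such narrow step might look like (this is illustrative only; Yokoy’s actual models and features are not described here): a single small classifier whose only job is to answer “invoice or receipt?”, fed by features extracted in earlier steps and handing its answer to later ones.

```python
# Sketch of one narrow pipeline step: "is this document an invoice or a receipt?"
# (Generic illustration; not Yokoy's model. Features and labels are invented.)
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features extracted upstream by other steps in the pipeline:
# [number_of_line_items, total_amount_eur, has_vat_id (0/1), has_due_date (0/1)]
X_train = [
    [12, 1840.0, 1, 1],   # invoice
    [ 8,  650.0, 1, 1],   # invoice
    [ 1,    3.2, 0, 0],   # receipt (the proverbial $3 coffee)
    [ 4,   57.9, 0, 0],   # receipt
    [15, 3200.0, 1, 1],   # invoice
    [ 2,   12.5, 0, 0],   # receipt
]
y_train = ["invoice", "invoice", "receipt", "receipt", "invoice", "receipt"]

doc_type_step = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

# Downstream steps (data extraction, policy checks, approval routing) would each
# be their own small model or rule set, chained after this decision.
new_doc = [[3, 41.0, 0, 0]]
print("Document type:", doc_type_step.predict(new_doc)[0])
```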
How do you steer the AI to add competitive advantage?
The most important thing is to get as much data as possible for the AI to learn better. Anyone can build an AI model. That, in itself, isn’t a differentiator for a company. One AI model may be better than another, though it would be hard to prove. What really sets a company apart is the quantity and quality of their data. Our research lab, consisting of MIT and ETH Zurich graduates, wouldn’t get very far without it. Data is core and it’s the reason why it was important for us to get as many customers as possible, because the AI learns not just from each specific customer but across all customers. Every receipt, invoice or card statement in the tool makes the tool better — for that individual customer and for everyone together.
So, having more customers — apart from being something your investors will like — is also a need imposed by the technology?
(Laughs) Yes. The investors are interested in profit but they also like data. They see the value of data. Getting a customer at any cost doesn’t always mean competing on price. Sometimes it’s more important to get the data for your long-term competitive advantage.
There are many people who are considering AI, but it’s like magic to them. Is there anything you can share that people should or shouldn’t do, to get started?
In the past, you needed a full team to build things up from scratch. My experience is that, while you obviously need some expertise, you don’t need a team of 50 people to get started. We had one person, a physicist from ETH Zurich, who leveraged all the existing software infrastructure out there. We now have a team of about 12 people, but they are still leveraging the technology that’s out there.
“You don’t need a team of 50 people to get started with AI”
If you’re going to use AI, you’re going to have to be able to answer questions about its “magical” properties clearly and honestly. When our sales team goes out to sell the product, customers have questions. They want to know where the AI is, what it does, who it is controlled by. They want to know why it’s desirable to have the company learning with their data. These are questions that, in today’s world, you’re expected to be able to answer. Bridging the knowledge and communication between the AI team and the business client is one of the biggest challenges.
On the one hand, you have an incredibly powerful new technology, and on the other, the age-old issue of customer trust.
Exactly. This is why it is essential for our AI team to be involved in our customer calls and listen to the questions our customers have. While it is a bit of shock therapy for our AI lab employees to step from research into the world of accountants, controllers and CFOs, they are quickly able to grasp our customers’ questions, adapt their communication and provide the information needed. Such interaction is essential for our clientele.
Presumably, the AI can also make mistakes in predicting or classifying. How do you ensure that this doesn’t become a serious problem?
AI can make mistakes. For example, we once had an invoice that, for some reason, charged in kilograms rather than euros, but this wasn’t specified on the invoice, except in a footnote. Having been fed on more conventional data, the AI got that one wrong but, to be fair, I think most account managers would have gotten it wrong, too. Even so, you can limit the damage of possible mistakes by setting a threshold above which documents still have to be manually checked, for example.
To be a future-ready organization, you have to use tools that keep learning and getting better. There’s no point in implementing something that stays the same or gets worse over time and needs to be replaced often. You want something that learns with you and won’t make that same mistake again.
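In that spirit, damage limitation can be expressed as a few explicit routing rules wrapped around the AI’s output. The thresholds, currency whitelist and field names below are assumptions made for the sketch, not Yokoy’s rules:

```python
# Sketch of damage-limiting review rules around an AI expense pipeline.
# (Illustrative only; thresholds and the confidence source are assumptions.)
from dataclasses import dataclass

AMOUNT_LIMIT = 1000.0        # assumed: anything above this always gets human eyes
CONFIDENCE_FLOOR = 0.90      # assumed: below this, route to manual review

@dataclass
class ExtractedDocument:
    vendor: str
    amount: float            # as read by the AI
    currency: str            # as read by the AI ("kg" would look suspicious here)
    confidence: float        # the model's own confidence in its extraction

def route(doc: ExtractedDocument) -> str:
    if doc.currency not in {"EUR", "USD", "CHF", "GBP"}:
        return "manual review (implausible currency/unit)"
    if doc.amount > AMOUNT_LIMIT:
        return "manual review (amount above limit)"
    if doc.confidence < CONFIDENCE_FLOOR:
        return "manual review (low extraction confidence)"
    return "auto-approve"

print(route(ExtractedDocument("Cafe Zurich", 3.0, "CHF", 0.97)))   # auto-approve
print(route(ExtractedDocument("Freight Co.", 2400.0, "kg", 0.95))) # manual review
```

Anything large, implausible or uncertain goes back to a human; everything routine flows straight through.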
How do you see the prospects for AI adoption in developing regions of the world?
Cost-wise, it is quite cheap to deploy AI in the developing world. The biggest hurdle will be adoption, something we already see as we work with more companies around the world. We see a difference in AI’s accuracy for different regions because, naturally, we have so many more receipts in Europe or the United States than we do for, say, Africa. But the technology costs are actually quite low, so with proper regulation (which I think is important) there is great potential there.
How important is AI to you in the short to medium term?
AI is part of our DNA. Our company wouldn’t exist without it. As AI continues to learn from new inputs, we will achieve higher levels of automation of expenses, freeing employees from having to waste their time on such tasks. We expect to reach a point where 92% of accounts payable processes and expense management can be fully automated, meaning that only 8% of invoices and receipts will ever need to be looked at by a human again.
Interview with Inma Martinez
Inma Martinez, digital pioneer and AI scientist, is chair of the Multistakeholder Experts Group and co-chair of the steering committee of the Global Partnership on Artificial Intelligence (GPAI).
“I was on the internet before it had a name,” quips Inma Martinez, who has been working with digital technologies since the early 1990s when the internet was “like the Wild West.” She has been closely involved in its evolution — pioneering the development of mobile IP and streaming services; strategizing with companies on the rise of Web 2.0, big data and the digitalization of industries; and now, as an engaged scientist, formulating policies to deal with the fast-emerging capabilities of artificial intelligence.
Throughout it all, she has noticed a yawning regulatory vacuum, with governments and business leaders both scrambling to catch up. “It has taken 20 years for people to start realizing their digital rights,” she told participants at IESE’s Global Alumni Reunion in Munich. “We need our technology and our AI to take human wellbeing into account.”
It’s a message she has been taking to classrooms, as a lecturer at Imperial College London, and to governments, particularly within the European Union and now at the Global Partnership on Artificial Intelligence (GPAI), the G7/OECD agency for development and cooperation on AI. Here, she outlines what constitutes a good AI framework — and warns of the consequences of not getting it right.
What’s the state of AI deployment today?
It varies widely — between different business sectors, between different governments, between different regions of the world. Generally speaking, industry understands how AI automation or optimization works, because they have been dealing with it for years. They may, of course, struggle to implement it, but the benefits are clear for them to see.
Governments, on the other hand, don’t always have it so clear. When meeting with them, you sometimes get the sense they have just read a McKinsey report on their way up in the elevator. Or they approach AI as if it were just another piece of software, which it’s not — it’s a multifaceted technology that’s 60 years old and continuously evolving; it’s “alive,” making it vital to understand what it is and what it’s not.
The world is increasingly divided into AI-ready countries — with Singapore in the lead, followed by the U.S., the EU, the U.K., a few other Asian economies — and then all the rest. We need to make sure there’s also a Latin American and an African presence in discussions of AI adoption and deployment. AI must not become a tool for rich countries alone. Whatever solutions we come up with, we must consider the inclusion of developing countries.
Other concerns about AI often relate to privacy or job security. Are people right to be worried?
The problems that we see in the news are growing by the day. Look at Facebook: Even after fiascos like the Cambridge Analytica data scandal and the manipulation of social media, they continue to have data breaches — not just once a year but three or four times a year, affecting hundreds of millions of people each time. And it’s clear that nobody has created an app that anyone can really trust, which may be a reason why the EU is constantly fining Google.
Part of these problems relate back to history: the unregulated environment out of which many of these technologies emerged, as well as the fact that massive initial funding for AI was mainly for military projects, so there was never any “consumer” to protect. As AI has moved into every sector and become a commercial product, we’re only just beginning to address things like consumer protections or regulatory frameworks for putting fully autonomous vehicles onto our streets.
“We need AI that takes human wellbeing into account”
In the absence of any controls, countries are moving to fill the gaps. However, as they all have different values and ideologies, we see them adopting very different approaches. China, for example, is developing and deploying facial recognition technology. In Europe, this goes against our civil liberties and wouldn’t be allowed. The U.S. is heavily influenced by lobbies that favor corporate rather than consumer interests, making it lag behind other regions when it comes to consumer rights.
Couldn’t too much regulation harm AI’s development?
Regulation is actually desirable from a competitiveness point of view. In developed economies, governments and businesses are eager to invest in AI because they see the competitive advantage it can bring. In those societies, there’s a growing expectation, if not a government directive, that business strategies must incorporate sustainability, inclusion and diversity in their digital deployments, which would contribute to social welfare and guarantee transparent practices.
For example, if you’re looking to invest in a machine that can make accurate AI predictions based on MRIs or CT scans, whose machine are you going to buy? The one from a regulated environment, where the data is guaranteed and protected, and where the AI has been developed ethically and without bias? Or the machine from a vendor who guarantees none of those things?
What, for you, are the ingredients of a good AI framework?
It starts with data integrity and the assumptions you make on the data. It means guaranteeing the origin of the data and what will be done with it. And the AI has to be auditable, meaning it has to be transparent and trustworthy. The region that can develop trustworthy AI platforms and systems is going to have a competitive advantage. And at the moment, that region looks to be Europe. This is already happening with drones. The industry for unmanned aerial vehicles is flourishing in Europe because the data can be guaranteed. So, regulation represents an opportunity to become leaders in trustworthy, authenticated AI solutions.
What is the biggest AI threat we face?
That would be the flip side of everything I just said: unexplainable AI, where the origin of the data cannot be assured. AI is just a machine. If you feed it with false data, it doesn’t know that data is false, and it makes decisions according to what it knows.
The urgent need to guarantee data is why blockchain is being developed. Soon, we will see huge solutions in which blockchain and AI work together. Blockchain guarantees data transparency and authenticity. Likewise, AI is like a huge muscle to help blockchain scale up.
In addition to defining good frameworks, we need governments to start placing limits on the ways that AI can be misused.
What are some of the ways that AI can be a force for good?
An example I like to use is agriculture. Humanity is facing a huge predicament because the amount of arable land decreases every year, as do water resources. With AI, you can systematize irrigation, you know where to apply pesticides, and you know when there’s not enough nitrogen in the soil. When you “technify” something as analog and biological as agriculture, you improve yields, and you protect the land and the water. And it doesn’t do away with jobs; you still need people in the fields. Again, it’s about conceiving of AI for everyone.
As you said before, that must include developing as well as developed countries, and presumably women as well as men. Given that women continue to be underrepresented in STEM, do they risk getting left behind in the future of AI?
Finding talent is one of the major challenges to the future deployment of AI. I always try to communicate that anyone can work in the field. You really don’t need to be a super mathematician or even any kind of mathematician at all. You can be a philosopher. You can be a psychologist, helping to develop behavioral systems. AI requires a great many skills to be developed the way it should be. The hunt for talent is on.
AI is transformative: are you?
By Sampsa Samila
The IESE Global Alumni Reunion brought home the point that, as a general-purpose technology with a wide range of applications, artificial intelligence (AI) allows business leaders to obtain impressive results very quickly.
However, I would argue that the real benefits of AI do not materialize without real transformation.
As with all prior general-purpose technologies, whether steam or electric power, the improvements in core processes and new business models yield their full benefits only with significant changes to the organization of the company. Hence, to take full advantage of AI, it’s important to understand its full properties.
AI doesn’t just improve the speed, efficiency and accuracy of core processes. It also enables businesses that used to be hard to scale and local in scope to operate globally. And as it becomes faster and easier to grow your company to serve more customers around the world, the rest of your business may have to be redesigned accordingly.
Take the example of software updates. Running a routine software update overnight could well improve the core processes of your AI-driven company across the entire world. But how should it be implemented?
Centralizing the decision of when and how to update would add unnecessary delay and complexity, whereas empowering the people in charge of those processes to update them when they think best would be faster and more efficient. In this sense, AI-driven companies benefit from increased autonomy and decentralized decision-making. This, in turn, may require a more modular organizational structure.
However, the team interfaces need to be clearly defined so as to lower the chances of conflict between different autonomous units. And to ensure each team is working in the same direction, more time will need to be spent communicating the corporate vision, mission and values very clearly. Shared beliefs and understandings — corporate culture, in other words — become essential to enable coordination.
Thus, from a “simple” process improvement, AI raises a host of other factors and issues that invariably involve the entire company.
Or take positive feedback loops, whereby as you serve more customers, you collect more data, which the AI learns from and the algorithms keep getting better, making it more likely you will get more customers, and the cycle repeats. But to reap such benefits requires an intentional collection of data, a pooling of all feedback across all uses of the algorithm from around the world, and then a redistribution of the improved algorithm back out into the world. Again, this requires coordinated organizational processes able to gather, incorporate, respond and deliver quickly.
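Stripped to its mechanics, and with placeholder data and a placeholder model standing in for the real thing, that loop looks something like this: gather labeled feedback from every deployment, retrain one shared model on the pooled data, and push the improved version back out.

```python
# Sketch of the data flywheel: pool feedback from every deployment, retrain,
# redistribute. (Placeholder data and model; a real pipeline adds validation,
# versioning and staged rollouts.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def feedback_batch(n):
    """Labeled outcomes collected from one regional deployment (synthetic here)."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y

deployments = {"EMEA": feedback_batch(200), "APAC": feedback_batch(150),
               "AMER": feedback_batch(180)}

# 1) Gather: pool the feedback from every region into one training set.
X_all = np.vstack([X for X, _ in deployments.values()])
y_all = np.concatenate([y for _, y in deployments.values()])

# 2) Retrain the shared model on the pooled data.
shared_model = LogisticRegression(max_iter=1000).fit(X_all, y_all)

# 3) Redistribute: every deployment now runs the same improved model.
rollout = {region: shared_model for region in deployments}
print("Updated model redistributed to:", sorted(rollout))
```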
Underlying all AI applications is data, a key asset for every company today. To enable innovative AI applications to happen anywhere in your company, data assets must be made widely available across the entire organization.
AI is fundamentally a management challenge, more than a technology challenge
In contrast to physical assets, there is no limit to how many people or algorithms can be using the same data asset at any given time, and data does not wear out with use — in fact, it gets better. Furthermore, the value of data depends on understanding its meaning and context, so it’s best if AI applications are developed by the people who collected the data and understand it.
However, when data from one unit is needed by others, that unit needs to develop its data in a way that others can use it, as a so-called data product. This underscores the point that data development and management should not be centralized. Rather, you should create a decentralized data culture where new data products are developed and used to create new value across the entire business.
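What publishing data as a product might mean in practice can be sketched as a simple contract: an owner, a schema and a set of quality checks that travel with the data, so other units can consume it without a hand-holding session. The fields and the example below are illustrative assumptions, not a standard.

```python
# Sketch of a minimal "data product" contract a unit might publish internally.
# (Field names and the example are illustrative assumptions, not a standard.)
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class DataProduct:
    name: str
    owner: str                               # the unit accountable for the data
    schema: Dict[str, str]                   # column -> type, the usage contract
    refresh: str                             # how fresh consumers can expect it to be
    quality_checks: List[Callable[[dict], bool]] = field(default_factory=list)

    def validate(self, record: dict) -> bool:
        """Run the published quality checks before a record is shared."""
        return all(check(record) for check in self.quality_checks)

orders = DataProduct(
    name="orders.daily",
    owner="sales-operations",
    schema={"order_id": "str", "amount_eur": "float", "region": "str"},
    refresh="daily by 06:00 CET",
    quality_checks=[lambda r: r["amount_eur"] >= 0,
                    lambda r: r["region"] in {"EMEA", "APAC", "AMER"}],
)

print(orders.validate({"order_id": "A-1", "amount_eur": 99.5, "region": "EMEA"}))  # True
```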
Democratized data is a powerful management tool. Collecting and sharing real-time business data enables you and your managers to understand what is going on in the company. Furthermore, if real-time business data is shared across the organization, it enables greater alignment between units, as they also understand what other units are doing, and it increases execution speed, as other units can track each other’s performance.
The ethical issues facing us as managers of AI are also considerable. Beyond data privacy and related regulation lies a world of other challenges. Algorithms relating to humans should be fair and ethical. Redesigning your organization will require reskilling employees, and not everyone will be ready for that.
The core leadership challenge is that all these areas must be tackled personally. A technology that requires possibly significant rethinking of core processes cannot be delegated to outside parties or even to others inside the company. You cannot run your business through a translator or intermediary, including a CIO. You, personally, must understand how the technology works at a conceptual level, how it can be used in your business and how best to take advantage of it.
AI is fundamentally a management challenge, more than a technology challenge. And you need to be able to lead and inspire to take the whole organization on this journey.
Sampsa Samila is Associate Professor of Strategic Management and Director of IESE’s Artificial Intelligence and the Future of Management Initiative.
This Report forms part of the magazine IESE Business School Insight 163. See the full Table of Contents.
This content is exclusively for individual use. If you wish to use any of this material for academic or teaching purposes, please go to IESE Publishing where you can obtain a special PDF version of this report as well as the full magazine in which it appears.