Responsible AI Has Become Critical for Business

Dec 06, 2022 6 Min Read

Investors need to prioritise the ethical deployment of AI – too much is at stake if they don’t.

Investors, take note. Your due diligence checklist may be missing a critical element that could make or break your portfolio’s performance: responsible AI. Beyond screening and monitoring companies for future financial returns, growth potential and ESG criteria, it’s time for private equity (PE) and venture capital (VC) investors to start asking hard questions about how firms use AI.

Given the rapid proliferation and uptake of AI in recent years – 75 percent of all businesses already include AI in their core strategies – it’s no surprise that the technology is top-of-mind for PE and VC investors. In 2020, AI accounted for 20 percent, or US$75 billion, of worldwide VC investments. McKinsey & Company has reported that AI could increase global GDP by roughly 1.2 percent per year, adding a total of US$13 trillion by 2030.

AI now powers everything from online search to medical research to workplace productivity. But, as with most technologies, it can be problematic. Hidden algorithms may threaten cybersecurity and conceal bias; opaque data practices can erode public trust. A case in point is BlenderBot 3, the AI chatbot launched by Meta in August 2022. It made anti-Semitic remarks and factually incorrect statements about the United States presidential election, and even asked users for offensive jokes.

In fact, the European Consumer Organisation’s latest survey on AI found that over half of Europeans believed that companies use AI to manipulate consumer decisions, while 60 percent of respondents in certain countries thought that AI leads to greater abuse of personal data.

How can firms use AI in a responsible way and work with cross-border organisations to develop best practices for ethical AI governance? Below are some of our recommendations, which are covered in the latest annual report of the Ethical AI Governance Group, a collective of AI practitioners, entrepreneurs and investors dedicated to sharing practical insights and promoting responsible AI governance.

Best practices from the ESG movement

PE and VC investors can leverage lessons from ESG – short for environmental, social and governance – to ensure that their investee companies design and deploy AI that generates value without inflicting harm.

ESG is becoming mainstream in the PE realm and is slowly but surely making its mark on VC. We’ve seen the creation of global industry bodies such as VentureESG and ESG_VC that advance the integration of sustainability into early-stage investments.

Gone are the days when it was enough for companies to deliver financial returns. Now, investors regularly solicit information about a fund portfolio’s compliance with the United Nations Sustainable Development Goals. Significant measures have been taken since 2018 to create comparable, global metrics for evaluating ESG performance. For example, the International Sustainability Standards Board was launched during the UN Climate Change Conference in 2021 to set worldwide disclosure standards.

Beyond investing in carbon capture technologies and developing eco-friendly solutions, firms are being pressed to account for their social impact, including on worker rights and the fair allocation of equity ownership. “Investors are getting serious about ESG,” headlined a 2022 report by Bain & Company and the Institutional Limited Partners Association. According to the publication, 90 percent of limited partners would walk away from an investment opportunity if it presented an ESG concern.

Put simply, investors can no longer ignore their impact on the environment and the communities they engage with. ESG has become an imperative, rather than an add-on. The same can now be said for responsible AI.

The business case for responsible AI

There are clear parallels between responsible AI and the ESG movement: For one thing, both are simply good for business. As Manoj Saxena, chairman of the Responsible Artificial Intelligence Institute, said recently, “Responsible AI is profitable AI.”

Many organisations are heeding the call to ensure that AI is created, implemented and monitored through processes that guard against negative impacts. In 2019, the OECD established its AI Principles to promote AI that is innovative and trustworthy, and that respects human rights and democratic values. Meanwhile, cross-sector partnerships including the World Economic Forum’s Global AI Action Alliance and the Global Partnership on Artificial Intelligence have established working groups and schemes to translate these principles into best practices, certification programmes and actionable tools.

There has also been the emergence of VC firms such as BGV that focus on funding innovative and ethical AI firms. We believe that early-stage investors have a responsibility to help build ethical AI start-ups, and can do so through better due diligence, capital allocation and portfolio governance decisions.

The term “responsible AI” speaks to the bottom-line reality of business: Investors have an obligation to ensure that the companies they invest in are honest and accountable. Those companies should create rather than destroy value, with a careful eye not only on reputational risk, but also on their impact on society.

Here are three reasons why investors need to embrace and prioritise responsible AI:

1. AI requires guardrails

For a taste of what happens when companies seemingly lose control over their own inventions, one only has to look at social media, where digital platforms have become vehicles for everything from the dissemination of fake news and privacy violations to cyberbullying and grooming.

With AI, there’s still an opportunity to set rules and principles for its ethical use. But once the genie is out of the bottle, we can’t put it back in, and the repercussions will be sizeable.

2. Regulatory pressure imposes strong consequences

Governments worldwide are tightening digital regulations on online safety, cybersecurity, data privacy and AI. In particular, the European Union has passed the Digital Services Act (DSA) and the Digital Markets Act (DMA). The former aims to establish a safe online space where the fundamental rights of all users are protected.

The DSA requires large platforms (think search engines, social media and online marketplaces) to be transparent about advertising, protect data privacy and address illegal or harmful content, with fines of up to 6 percent of annual worldwide turnover for non-compliance. The DMA targets the largest of these platforms, designated as “gatekeepers”, and carries fines of up to 10 percent of turnover, rising to 20 percent for repeated offences. In extreme cases, regulators may even break up a company. Both acts come into effect as soon as 2023.

In a recent study on C-suite attitudes towards AI regulation and readiness, 95 percent of respondents from 17 geographies believed that at least one part of their business would be impacted by EU regulations, and 77 percent identified regulation as a company-wide priority. Regulators in the US and Asia are carefully following the progress made in Europe and will surely follow suit over time.

3. Market opportunities

It has been estimated that 80 percent of firms will commit at least 10 percent of their AI budgets to regulatory compliance by 2024, with 45 percent pledging to set aside a minimum of 20 percent. This regulatory pressure generates a huge market opportunity for PE and VC investors to fund start-ups that will make life easier for corporates facing intense pressure to comply.

Investors wondering about AI’s total addressable market should be optimistic. In 2021, the global AI economy was valued at approximately US$59.7 billion, and the figure is forecast to reach some US$422 billion by 2028. The EU anticipates that AI legislation will catalyse growth by increasing consumer trust and usage, and making it easier for AI suppliers to develop new and attractive products. Investors who prioritise responsible AI are strongly positioned to capture these gains.

Worth the effort

The call for investors to integrate responsible AI into their investments may feel like a tall order. It requires specialised talent, new processes and ongoing monitoring of portfolio company performance. Many fund managers, to say nothing of limited partners, don’t yet have the capacity to achieve this.

But AI’s impending regulation and the market opportunities it presents will change how PE and VC firms operate. Some will exit, shifting resources to less regulated sectors. Others will add screening tools for AI risks, fortifying themselves against reputational damage while balancing internal capabilities. Still others will treat responsible AI as mission critical.

Awareness is the greatest agent of change, and it can be built by adapting best practices on ethical AI governance from the community of start-ups, enterprises, investors and policy practitioners. Those who step up before it’s too late and proactively help shape the rules as they are being written will reap the benefits, both economically and in terms of fuelling sustainable growth.

This is an adaptation of an article published in the Ethical AI Governance Group’s 2022 Annual Report.

Edited by: Rachel Eva Lim

This article is republished courtesy of INSEAD Knowledge. Copyright INSEAD 2022.

Claudia Zeisberger is a Senior Affiliate Professor of Entrepreneurship & Family Enterprise at INSEAD and the Founder & Academic Co-Director of the school’s Global Private Equity Initiative. She is the author of Mastering Private Equity and Private Equity in Action. Follow her on YouTube to learn more about private capital.

Anik Bose is a Managing General Partner at BGV and the Founder of the Ethical AI Governance Group. View full profile here.
