When whistleblower Frances Haugen revealed last year that Facebook knew about the impact of its products on teenagers and the extent of misinformation on its platforms, it was only the latest, albeit devastating, warning of the harm that technologies once hailed as transformative can inflict if not properly harnessed. Lawmakers and regulators in Europe, the Middle East and the United States are preparing further regulation to hold Big Tech to account, but venture capital has a unique role to play in keeping artificial intelligence ethical – today and in the future.
VC investors not only provide funding to start-ups and fast-growing technology companies, but also offer advice and guidance to the entrepreneurs and founders behind future unicorns – start-ups valued at over US$1 billion. Last year, start-ups raised a record US$621 billion from VCs and corporate venture vehicles, according to CB Insights. By the end of 2021, there were 959 unicorns around the world, up from 569 in 2020.
With so much power in VC hands comes a duty of responsibility. Indeed, institutional investors have started asking VC partners to embed ESG (environmental, social and governance) principles in their investment mandates, much as they have demanded of private equity funds in recent years. AI is arguably the vertical where such scrutiny is most needed.
AI creates enormous value potential and in turn the opportunity to achieve outsized returns. It also poses substantial risks. Recommender systems, for example, steer users to products and services that match their past behaviour online, in effect creating echo chambers and the potential for polarisation and even radicalisation.
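To see how this dynamic arises, consider a toy recommender. The sketch below is purely illustrative – the catalogue, topics and scoring rule are invented for this example and do not reflect any real platform's algorithm – but it shows how always serving the items closest to a user's history steadily narrows what that user sees:

```python
import random
from collections import Counter

# Toy illustration (invented for this article) of a naive recommender:
# always serve the items whose topic best matches the user's history.
random.seed(42)

TOPICS = ["politics", "sports", "science", "arts", "travel"]
# Catalogue: 100 items, each tagged with a random topic.
catalogue = [(item_id, random.choice(TOPICS)) for item_id in range(100)]

def recommend(history, k=5):
    """Return the k unseen items whose topic appears most often in history."""
    topic_counts = Counter(topic for _, topic in history)
    seen = {item_id for item_id, _ in history}
    candidates = [(topic_counts[topic], item_id, topic)
                  for item_id, topic in catalogue if item_id not in seen]
    candidates.sort(reverse=True)  # highest-scoring topics first
    return [(item_id, topic) for _, item_id, topic in candidates[:k]]

# A mildly skewed starting point: two politics items, one science item.
politics = [x for x in catalogue if x[1] == "politics"]
science = [x for x in catalogue if x[1] == "science"]
history = politics[:2] + science[:1]

for step in range(5):
    recs = recommend(history)
    history.extend(recs)  # assume the user consumes every recommendation
    seen_topics = {topic for _, topic in history}
    print(f"step {step}: recommended {[t for _, t in recs]}, "
          f"topics seen so far: {len(seen_topics)}/{len(TOPICS)}")
```

Because each round of consumption is fed back into the next round of scoring, the dominant topic crowds out everything else within a few iterations – the echo chamber in miniature. Real recommender systems are vastly more sophisticated, but the feedback loop is the same in kind.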
The damage cuts both ways: in the aftermath of Haugen's revelations, Facebook reported that its user numbers dipped in the last three months of 2021 – the first decline in its history. Using tech responsibly is not only the right thing to do, but a sound business decision, analogous to organisations' ESG initiatives.
We need to start building today a digital world that is safe and beneficial for all. It is time for VC investors and founders to work together to develop a materiality index and meaningful tools to measure and assess the risks of AI-based business models, track the development of start-ups, and iterate towards a widely accepted standard of “ethical AI” practices.
Meanwhile, companies are adopting new practices, processes and tools to ensure, among other things, that the AI and content platforms they deploy adhere to basic principles such as human rights, fairness and safety. These measures include intelligent content moderation, AI monitoring systems, new governance structures such as AI ethics boards, and product development practices that integrate risk and responsibility throughout. Although many of these measures may only be practical for larger organisations, there are plenty of possibilities for early-stage tech ventures.
All other stakeholders – from consumers and civil society to regulators – need to better understand AI risks and how to manage them. Regulators are now developing frameworks to strengthen our collective guardrails. The EU, for example, is drafting laws to govern online platforms and shore up online trust and safety, as well as AI risk management. Lawmakers from Australia to North America are taking similar measures.
It may be time to manage technology risks as seriously as we have managed ESG-related ones. We must use technology responsibly and keep AI ethical – or brace ourselves for social ruptures on an even larger and more dangerous scale.