Introduction 

The internet changed everything. Although few truly understood how it worked, nearly everyone understood that it would revolutionize life. Fewer still contemplated the harms it might one day cause, and with little in the way of restraint, the internet's growth was swift and unbridled. As the modern world embraces another game-changing technology, society, armed with the benefit of hindsight and years of study of the internet's harmful side effects, is by and large much more cautious about the development and application of artificial intelligence (AI), especially generative AI.

Though generative AI is still in its infancy, we seem to have a much greater understanding of its inherent risks than we had of the internet's when it was a similarly fledgling technology. Our collective anxieties about where an adaptive and autonomous technology might lead us have steered us toward the emerging field of "Responsible AI." Most existing processes and tools, from code development to risk management, were designed for traditional software systems; they struggle with generative AI systems and are ineffective at managing emergent risks and preventing harmful outcomes. Responsible AI will be critical in responding to these challenges and delivering trustworthy AI systems.

Because general partners (GPs) and limited partners (LPs) are involved both in developing AI and in applying it to the companies and assets they invest in, they have a vested interest in ensuring that AI is developed and deployed responsibly. We recognize the immense benefits these systems could deliver, both financial and otherwise. And while there is huge collective focus on this upside, Responsible AI practices must mature for it to materialize. Today, this is a nascent field, and our firm, through this paper and related efforts, hopes to contribute to its development. Leveraging existing ESG frameworks and expertise in value creation will be helpful to this end.

This paper provides an overview of the scope, history, global initiatives, and regulatory developments of AI, broadly encompassing generative AI. It introduces the concept of Responsible AI, which seeks to deliver trustworthy AI systems. The risks endemic to these systems are explored, as are the newest leading AI risk-management frameworks. This paper seeks to contribute to the nascent practice of Responsible AI in private markets, providing examples and suggested best practices at the GP, LP, and asset levels. We also explore how ESG practices dovetail with Responsible AI and pay close attention to "high-risk sectors."

Defining AI

AI is commonly thought of as technology that enables computers and machines to simulate human intelligence and problem-solving capabilities. This definition isn't wrong per se, but it is incomplete. The Organisation for Economic Co-operation and Development (OECD), an intergovernmental organization whose AI Policy Observatory tracks developments in the field, offers the following definition, which was revised in 2023 to encompass generative AI considerations such as autonomy and adaptiveness:

“An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.”

This definition introduces some important concepts that lie at the heart of Responsible AI: namely, how AI affects humans and influences physical and virtual environments.
