Evaluating AI Governance

WHITEPAPER

Ravit Dotan, Gil Rosenthal, Tess Buckley, Josh Scarpino, Luke Patterson, Thorin Bristow

12/2/2023

In this study, we examine how companies govern their AI systems. Our aim is to identify benchmarks and trends in this domain, focusing in particular on the correlation between signals of good governance and implementation activities. By relying on publicly available information, we hope to support individuals and groups who need to evaluate companies but have limited access to internal data, such as consumers, investors, and procurement teams.

For this study, we used data collected by EthicsGrade, an organization specializing in evaluating companies' ethical practices. The dataset covered 254 companies, giving us a substantial sample for analysis. To assess these companies' governance practices, we applied the NIST AI Risk Management Framework (NIST AI RMF), a widely recognized framework for managing risks associated with AI systems.

Our analysis focused on identifying potential signals of good governance and their relationship with implementation activities aimed at measuring and minimizing risks. We sought to uncover whether companies with established AI ethics principles were more likely to engage in proactive risk management practices.
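To make the analysis concrete, the sketch below shows one way such a relationship could be tested on company-level data. The file name, column names, and choice of statistical tests are illustrative assumptions for this post, not the study's actual pipeline.

```python
# Hypothetical sketch: testing whether companies that publish AI ethics
# principles also report more risk-management implementation activities.
# The CSV file and column names below are illustrative assumptions.
import pandas as pd
from scipy import stats

# One row per company, coded from publicly available information.
df = pd.read_csv("company_governance.csv")

# Split companies by whether they publish AI ethics principles (0/1 flag).
with_principles = df.loc[df["has_ethics_principles"] == 1, "n_implementation_activities"]
without_principles = df.loc[df["has_ethics_principles"] == 0, "n_implementation_activities"]

# Mann-Whitney U test: do companies with published principles engage in
# more implementation activities? (No normality assumption on the counts.)
u_stat, u_p = stats.mannwhitneyu(with_principles, without_principles, alternative="greater")

# Point-biserial correlation between the binary signal and activity counts.
r, r_p = stats.pointbiserialr(df["has_ethics_principles"], df["n_implementation_activities"])

print(f"Mann-Whitney U = {u_stat:.1f} (p = {u_p:.3f})")
print(f"Point-biserial r = {r:.2f} (p = {r_p:.3f})")
```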

Findings and Insights

The results revealed several notable patterns in how companies govern their AI systems. A substantial share of the companies had publicly available AI ethics principles, indicating growing awareness of the importance of ethical considerations in AI development and deployment.

Furthermore, companies with well-defined AI ethics principles were more likely to engage in implementation activities aimed at measuring and minimizing risks associated with their AI systems. These activities included robust risk assessment procedures, ongoing monitoring and evaluation, and the establishment of clear accountability mechanisms.

By providing access to this information, we empower consumers, investors, and procurement teams to make more informed decisions. Consumers can choose products and services from companies that prioritize ethical AI practices, while investors can identify companies that are proactive in managing AI-related risks.

Moreover, our study highlights the importance of transparency and accountability in the governance of AI systems. Companies that openly share their AI ethics principles and engage in proactive risk management activities demonstrate a commitment to responsible AI development and deployment.

Through this study, we have gained valuable insights into how companies govern their AI systems based on publicly available information. By analyzing the relationship between signals of good governance and implementation activities, we have provided a guide for how companies can assess their ethical practices.

Ultimately, our goal is to promote transparency, accountability, and responsible AI practices, benefiting not only the companies themselves but also the wider society that interacts with AI systems on a daily basis.