Surveys of the 10 largest venture capital funds and the two largest start-up accelerators investing in Generative AI companies revealed that hardly any are taking steps to safeguard human rights.
Leading venture capital (VC) firms are failing in their responsibility to respect human rights, especially in relation to new Generative AI technologies, warned Amnesty International USA (AIUSA) and the Business & Human Rights Resource Centre in research released today.
Leading VC firms have failed to implement basic human rights due diligence processes to ensure the companies and technologies they fund are rights-respecting, as set out in the UN Guiding Principles on Business and Human Rights (UNGPs). This is particularly concerning given the potentially transformative impacts Generative AI technologies could have on our economies, politics and societies.
Michael Kleinman, Director of AIUSA’s Silicon Valley Initiative, said “Generative AI is poised to become a transformative technology that could potentially touch everything in our lives. While this emerging technology presents new opportunities, it also poses incredible risks, which, if left unchecked, could undermine our human rights. Venture capital is investing heavily in this field, and we need to ensure that this money is being deployed in a responsible, rights-respecting way.”
Late Friday 9 December, EU negotiators reached political agreement on the AI Act, paving the way for legal oversight of the technology. The law is considered the world’s most comprehensive on AI so far and will affect companies globally – meaning venture capital firms need to rapidly reconsider their approach. High-risk AI systems, spanning various sectors, must undergo mandatory fundamental rights impact assessments. The European Parliament stated that systems posing “significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law” are considered high-risk, including AI systems that can influence election outcomes and voter behaviour. The Act also grants citizens the right to file complaints and receive explanations for AI-powered decisions that have impacted their rights.
Meredith Veit, Tech & Human Rights Researcher, Business & Human Rights Resource Centre, said “The fundamental rights impact assessment obligation within the new EU AI Act is very welcome, particularly considering its impact on the deployment of undercooked generative AI systems. But with the finer details yet to be finalised, it is essential that a human rights-based approach shine through in the more specific requirements of the regulation – for both public and private actors. That way investors can make informed decisions considering salient human rights and material risks. And while the EU is advancing important mandatory corporate due diligence legislation in the form of the Corporate Sustainability Due Diligence Directive (CSDDD), which we hope can fill some of the AI Act’s loopholes, it cannot be relied upon to hold all actors within the tech ecosystem to account. Start-ups developing potentially harmful AI systems, for example, need to be scrutinised through the EU AI Act, since they are not within the scope of the CSDDD.”
Our research
To assess the extent to which leading VC firms conduct human rights due diligence on their investments in companies developing Generative AI, Amnesty International USA and the Business & Human Rights Resource Centre surveyed the 10 largest venture capital funds that invested in Generative AI companies, and the two largest start-up accelerators most actively investing in Generative AI.
The VC firms and start-up accelerators surveyed, all based in the US, were Insight Partners, Tiger Global Management, Sequoia Capital, Andreessen Horowitz, Lightspeed Venture Partners, New Enterprise Associates, Bessemer Venture Partners, General Catalyst Partners, Founders Fund, Technology Crossover Ventures, Techstars and Y Combinator.
This analysis revealed that the majority of leading VC firms and start-up accelerators are ignoring their responsibility to respect human rights when investing in Generative AI start-ups:
- Only three out of the 12 firms mention a public commitment to considering responsible technology in their investments;
- Only one out of the 12 firms mentions an explicit commitment to human rights;
- Only one out of the 12 firms states it conducts due diligence for human rights-related issues when deciding to invest in companies; and
- Only one out of the 12 firms currently supports its portfolio companies on responsible technology issues.
The report calls for VC firms to adhere to the UNGPs, which stipulate that both investors and investee companies must take proactive and ongoing steps to identify and respond to Generative AI’s potential or actual human rights impacts. This entails undertaking human rights due diligence to identify, prevent, mitigate and account for how they address their human rights impacts.
Kleinman added, “Generative AI has the potential to be beneficial, but it can also facilitate physical harm, psychological harm, reputational harm and social stigmatisation, economic instability, and loss of autonomy or opportunities, and further entrench systemic discrimination against individuals and communities. This especially applies to Generative AI’s use in high-risk contexts such as conflict zones, border crossings, or when imposed on vulnerable persons. In the current global environment the risks couldn’t be more critical.
“Venture capital firms have an urgent responsibility to take proactive and ongoing steps to identify and respond to Generative AI’s potential or actual human rights impacts.”
Veit concluded, “It is, of course, possible to see the great potential of new technologies when they are designed using a human-centric approach. Unfortunately, the story of Generative AI thus far has largely been one of maximising profits at the expense of people, especially marginalised groups. But it isn’t too late for investors, companies, governments and rights-holders to take back control over how we want this technology to be designed, developed and deployed. There are certain decisions that we should not allow Generative AI to make for us.”