Förderjahr 2021 / Stipendien Call #16 / ProjektID: 5843 / Projekt: Impact of Artificial Intelligence on Women’s Human Rights
AlgorithmWatch launched the AI Ethics Guidelines Global Inventory, a compilation of frameworks and guidelines that seek to set out principles for the development and implementation of ethical artificial intelligence. In total, more than 160 guidelines exist, the majority of them written by governments, the private sector and civil society. Most are recommendations (115), some contain voluntary commitments (44) and only a few include binding agreements (8). A regional concentration is also recognisable, with most guidelines published in Northern America and Western Europe.
The Berkman Klein Center for Internet & Society gives a comprehensive overview of the AI ethics principles contained in these guidelines, identifying eight main themes that can be found in the majority of the documents.
- Privacy: Artificial intelligence systems affect privacy in many ways through the vast amounts of data they use, e.g. in surveillance, advertising and other applications. The principle of privacy matters not only in the deployment of AI systems, but also in their development and training.
- Accountability: This principle concerns who will be accountable for decisions made by artificial intelligence systems, given the enormous scale of the technology’s impact on the social and natural world. It can be mapped onto the different stages of the AI lifecycle: design, monitoring and redress.
- Safety and Security: Safety addresses the internal functioning of AI systems and the avoidance of unintended harm. Security refers to external threats to the system.
- Transparency and Explainability: Transparency refers to the idea that it should be possible to oversee the operations of AI systems. Explainability means the possibility of translating technical concepts into comprehensible formats.
- Fairness and Non-discrimination: Non-discrimination and the prevention of bias mean that bias introduced into an AI system through training data, technical design choices and the technology’s deployment should be mitigated to prevent discrimination. Fairness is defined as the equitable and impartial treatment of persons by AI systems; equality goes further than fairness by holding that people deserve the same opportunities regardless of whether they are similarly situated.
- Human Control of Technology: Human review of automated decisions includes the idea that people affected by AI systems should be able to request and receive human review of AI decisions, decide not to be subject to AI systems, and intervene in AI actions.
- Professional Responsibility: This theme reflects the responsibility of individuals and teams designing, developing or deploying AI-based systems.
- Promotion of Human Values: The promotion of human values is considered a key element of ethical and rights-respecting artificial intelligence. AI systems should correspond to, and be strongly influenced by, social norms.
The authors also examined whether human rights concerns have been incorporated into the ethical guidelines. 23 of the documents (64%) made such a reference, and only five documents (14%) explicitly employed a human rights framework. Documents from the private sector and civil society were the most likely to refer to human rights.
The majority of these documents contain vague formulations and, in most cases, no enforcement mechanisms at all. They are therefore often not considered suitable tools against harmful uses of AI systems. Especially in the context of human rights concerns, different approaches, mainly with a legal basis, are necessary. These will be discussed in more detail in the following blog articles.
Fjeld, Jessica, Nele Achten, Hannah Hilligoss, Adam Nagy, and Madhulika Srikumar, ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-based Approaches to Principles for AI’, Berkman Klein Center for Internet & Society (2020), 20 <nrs.harvard.edu/urn-3:HUL.InstRepos:42160420>.
Ibid., 21.
Ibid., 28f.
Ibid., 27.
Ibid., 42f.
Ibid., 48f.
Ibid., 53f.
Ibid., 56.
Ibid., 60.
Ibid., 64.
AlgorithmWatch, ‘In the realm of paper tigers – exploring the failings of AI ethics guidelines’ (28 April 2020) <algorithmwatch.org/en/ai-ethics-guidelines-inventory-upgrade-2020/>.