
Why We Need Ethics Regulation in Artificial Intelligence

January 6, 2023

In November 2021, the member states of UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence, the first global agreement on AI ethics, which aims to set standards in this field. The Recommendation is built around the fundamental idea that artificial intelligence should be human-centered and serve the interests of humanity, not the other way around. It highlights the benefits AI brings to society in many areas and points out the risks and challenges that come with misusing this technology. The goal is to establish policies and regulatory frameworks that ensure the fair use of these technologies, in accordance with human rights, and contribute to the achievement of the Sustainable Development Goals.

In this article, we discuss the need for ethical regulation in the field of AI and the main principles governing it.

10 Principles Governing the Regulation of Ethics in AI

The UNESCO Recommendation states that the proposed policies aim to promote trust and confidence at all stages of an AI system’s lifecycle. The Recommendation itself rests on four values for achieving ethical AI:

  • Respecting, protecting, and promoting human rights, fundamental freedoms, and human dignity.
  • Promoting environmental and ecosystem flourishing.
  • Ensuring diversity and inclusion.
  • Living in peaceful, just, and interconnected societies.

Proportionality and Do No Harm

AI systems must have a legitimate purpose, respect human rights, and be based on rigorous science. The use of AI systems is governed by the principle of necessity and proportionality; in particular, they must not be used for social scoring or mass surveillance.

Safety and Security

Harms and vulnerabilities that could enable attacks must be avoided, prevented, and eliminated throughout the lifecycle of AI systems to ensure the safety of people, the environment, and ecosystems.

Fairness and Non-Discrimination

AI actors must safeguard fairness and non-discrimination and ensure that the benefits of AI technologies are available to everyone.

Sustainability 

Assessments must be conducted to determine whether AI systems are consistent with sustainability goals, such as those currently set out in the United Nations Sustainable Development Goals, taking into account their impact along human, social, cultural, economic, and environmental dimensions.

Right to Privacy and Data Protection 

Privacy should be protected throughout the lifecycle of AI systems, and appropriate data protection frameworks have to be established.

Human Oversight and Determination 

Member states must ensure that they can always assign ethical and legal responsibility for AI systems to humans. Furthermore, life and death decisions should not be delegated to AI systems.

Transparency and Explainability

To improve democratic governance, efforts must be made to increase the transparency of AI systems and the explainability of their decision-making processes.

Accountability 

AI actors and UNESCO member states should respect, protect, and promote human rights and fundamental freedoms, and also promote the protection of the environment and ecosystems. Appropriate oversight, impact assessment, auditing, and due diligence mechanisms should be developed to ensure the accountability of AI systems.

Awareness Raising and Education

Public awareness and understanding of AI technologies should be promoted through open and accessible education, civic engagement, and AI ethics training, so that people can make informed decisions before using these technologies.

Adaptive Governance and Collaboration 

States must be able to regulate data generated on or passing through their territory and take steps toward effective data regulation in accordance with international law. Furthermore, to make AI governance inclusive, the participation of diverse stakeholders should be encouraged throughout the AI system’s lifecycle.

Why Is There a Need for Ethics Regulation in AI?

AI is now embedded in many cross-disciplinary applications we use daily, including conversational agents, decision support systems, facial recognition, drones, and autonomous cars. Society seems to have reached a point where it delegates more and more aspects of thinking to technology. The paradox is that the greater a society’s digital capacity, the more vulnerable it becomes.

As machines play an increasingly important role in decision-making, more ethical issues arise: erroneous decisions generated by AI, unethical responses from conversational agents, discriminatory decisions (bias), mass surveillance facilitated by AI, and so on.

As much as we want AI to mimic and replicate human behavior, including cognition, this technology is not yet endowed with human emotions, morality, critical thinking, or the ability to justify a decision. Industry experts therefore emphasize the importance of developing and maintaining human-centered AI systems that enhance people’s capabilities and work solely for the benefit of humanity. This means AI must not make vital decisions, such as those concerning human lives, that could have unethical consequences. In this context, the need to regulate these issues has arisen: over time, several documents have been proposed with the aim of eventually creating a commonly accepted framework of rules.