Daniel van der Kolk

February 07, 2025


1.2 What is AI ethics?

Before looking at AI ethics, we need to set out what ethics means in the first place.

Ethics seeks to answer questions like “what is good or bad”, “what is right or wrong”, or “what is justice, well-being or equality”. As a discipline, ethics involves systematizing, defending, and recommending concepts of right and wrong conduct by using conceptual analysis, thought experiments, and argumentation.

The three subfields of ethics

  • Meta-ethics studies the meaning of ethical concepts, the existence of ethical entities (ontology), and the possibility of ethical knowledge (epistemology).
  • Normative ethics concerns the practical means of determining a moral (or ethically correct) course of action.
  • Applied ethics concerns what a moral agent (defined as someone who can judge what is right and wrong and be held accountable) is obligated or permitted to do in a specific situation or a particular domain of action.

AI ethics is a subfield of applied ethics. Nowadays, AI ethics is considered part of the ethics of technology specific to robots and other artificially intelligent entities. It concerns the questions of how developers, manufacturers, authorities and operators should behave in order to minimize the ethical risks that can arise from AI in society, either from design, inappropriate application, or intentional misuse of the technology.

These concerns can be divided into three time frames as follows:

  • immediate, here-and-now questions about, for instance, security, privacy, or transparency in AI systems
  • medium-term concerns about, for instance, the impact of AI on military use, medical care, or justice and educational systems
  • longer-term concerns about the fundamental ethical goals of developing and implementing AI in society
