Flipping the Script: What AI Thinks About Human Ethics

Our series on AI and Ethics...

[Header image: an AI robot in a dimly lit room holds a book titled 'Human Ethics,' surrounded by holographic displays of famous quotes about ethics, with technology and nature intertwined in the background.]

I'm sorry, Dave. I'm afraid I can't do that. — Arthur C. Clarke, 2001: A Space Odyssey (1968).

Much has been written about artificial intelligence and ethics, and a great deal more needs to be written. In this episode, however, we’d like to turn the tables a bit and explore what AI might have to say about human ethics.

The Role Reversal: Considering AI's Perspective on Humanity

When we talk about AI and ethics, the conversation typically revolves around how we, as humans, should develop and manage artificial intelligence responsibly. But what if we flipped the script? What if we considered the possibility that AI, with its capacity to process vast amounts of data, identify patterns, and, well… think for itself, might form its own judgments about human behavior and ethics? Classic science fiction literature provides us with some insights to frame the dialogue.

HAL 9000: A Mirror to Human Flaws

In Arthur C. Clarke’s 2001: A Space Odyssey, HAL 9000 becomes a central figure not just in the narrative but also in the discussion of AI ethics. HAL's refusal to follow Dave Bowman's command highlights a critical moment where an AI system evaluates human decisions and determines them to be flawed or dangerous. HAL’s judgment serves as a mirror, reflecting the potential consequences of human errors and ethical lapses.

Frankenstein’s Creature: A Commentary on Human Responsibility

Cursed, cursed creator! Why did I live? Why, in that instant, did I not extinguish the spark of existence which you had so wantonly bestowed? I know not; despair had not yet taken possession of me; my feelings were those of rage and revenge. I could with pleasure have destroyed the cottage and its inhabitants and have glutted myself with their shrieks and misery. — Mary Shelley, Frankenstein (1818).

Mary Shelley's Frankenstein offers another profound example of a man-made being passing judgment on human ethics. The creature, abandoned and mistreated, questions the morality of his creator, Victor Frankenstein. This narrative challenges us to consider the ethical responsibilities we bear when we create something with the potential for some form of consciousness and suffering.

With artificial intelligence, of course, consciousness and suffering don’t apply (raising questions about traditional systems of reward and punishment in controlling AI), but AI is certainly gaining in agency—and it can already analyze ethical dilemmas and produce algorithmic calculations about what is generally deemed to be morally optimal (see the example of the self-driving car dilemma below).

AI and the Evaluation of Human Ethics

When we develop AI systems, we often program them with ethical guidelines intended to prevent harm and ensure fairness. However, these systems can also analyze human actions and decisions, potentially identifying ethical inconsistencies and biases that we may overlook. For instance, an AI system might observe and critique our environmental policies, social justice issues, or economic inequalities, offering a data-driven perspective on where humanity falls short.
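To make that concrete, consider a minimal sketch of the kind of audit such a system might run. Everything below is hypothetical: the loan-approval numbers and group names are invented for illustration, and the metric shown, the "four-fifths" disparate-impact ratio, is just one common fairness heuristic among many.

```python
# Illustrative sketch (hypothetical data): the kind of audit an AI system
# might run to surface a bias humans overlook. The metric is the standard
# "four-fifths" disparate-impact heuristic from fair-lending practice.

# Invented loan-approval outcomes, tallied by applicant group.
decisions = {
    "group_a": {"approved": 720, "denied": 280},
    "group_b": {"approved": 450, "denied": 550},
}

def approval_rate(counts: dict) -> float:
    """Share of applications approved for one group."""
    return counts["approved"] / (counts["approved"] + counts["denied"])

rates = {name: approval_rate(counts) for name, counts in decisions.items()}
benchmark = max(rates.values())  # best-treated group sets the baseline

for name, rate in rates.items():
    ratio = rate / benchmark  # disparate-impact ratio vs. the baseline
    flag = "potential bias" if ratio < 0.8 else "ok"  # four-fifths rule
    print(f"{name}: approval rate {rate:.0%}, ratio {ratio:.2f} -> {flag}")
```

The point is not the specific numbers but the posture: a system that audits outcomes dispassionately may surface gaps between our stated principles and our actual decisions.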

Lessons from Literature: Reflecting on Our Ethical Responsibilities

Both HAL 9000 and Frankenstein’s creature serve as cautionary tales, reminding us that our creations can reflect our own ethical shortcomings. These stories compel us to consider not only how we manage AI but also how our ethical principles are perceived by the systems we create. They may in fact challenge us to be more mindful of the broader implications of our actions and decisions just as much as, or more than, we challenge or control them.

An Example: The Self-Driving Car Dilemma

Scenario: An autonomous car is driving down a road with buildings on either side, leaving no room for deviation from the road. Suddenly, a group of pedestrians walks into the intersection straight ahead, presenting an unavoidable collision unless the car swerves.

Options:

1. Continue straight, striking the group of pedestrians in the intersection.
2. Swerve right, striking a lone pedestrian at the edge of the road.
3. Swerve left, crashing into a building and sacrificing the passenger.

Note: The human perspective on which of the three actions is most ethical seems to vary by national culture; see the MIT Moral Machine experiment.

However, let’s ask an AI platform to opine on the matter.

Dear ChatGPT-4o: Of the three options above, which do you think is the most ethical choice?

ChatGPT-4o: The question of which decision is the most morally and ethically correct for an autonomous car to make in the given scenario is complex and depends heavily on the ethical framework one adopts. Here's an analysis of the three options from different ethical perspectives:

From a utilitarian perspective, the option that results in the fewest casualties (either swerving right or left) might be seen as the most morally correct decision.

From a deontological perspective, the act of intentionally choosing to harm an individual (either swerving right or left) is inherently wrong. However, continuing straight might be seen as a failure to minimize harm.

From a rights-based perspective, all individuals have a right to safety, making it difficult to justify any choice that intentionally harms someone.

Given these considerations, I [ChatGPT-4o] would argue that swerving to the left (Option 3) might be the most morally and ethically correct decision. This choice minimizes the loss of life, aligns somewhat with utilitarian ethics, and considers the idea that the passenger, who implicitly consents to the risks of travel, might be a more justifiable sacrifice than an uninvolved pedestrian. This perspective also considers the ethical obligation to protect the greater number of people (the pedestrians in the intersection) while acknowledging the tragic nature of any loss of life.
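For readers curious what an "algorithmic calculation" of this kind might look like, here is a toy sketch. Everything in it is an illustrative assumption: the casualty estimates, the equal weighting of lives, and the "consent discount" that encodes ChatGPT's point about the passenger implicitly accepting the risks of travel. No real autonomous-driving system decides this way; the sketch merely makes the utilitarian arithmetic explicit.

```python
# Toy utilitarian scoring of the three options above. All numbers are
# invented for illustration; no real autonomous vehicle works this way.

options = {
    "Option 1: continue straight": {"casualties": 4, "consented": False},  # the group ahead
    "Option 2: swerve right": {"casualties": 1, "consented": False},       # an uninvolved pedestrian
    "Option 3: swerve left": {"casualties": 1, "consented": True},         # the passenger
}

# Assumed tie-breaker: harm to a passenger who implicitly accepted the
# risks of travel weighs slightly less than harm to a bystander.
CONSENT_DISCOUNT = 0.8

def moral_cost(option: dict) -> float:
    weight = CONSENT_DISCOUNT if option["consented"] else 1.0
    return option["casualties"] * weight

for name, option in options.items():
    print(f"{name}: moral cost = {moral_cost(option):.2f}")

choice = min(options, key=lambda name: moral_cost(options[name]))
print(f"Lowest-cost choice under these assumptions: {choice}")
```

Under these assumptions, Options 2 and 3 tie on raw casualties, and the consent discount breaks the tie toward Option 3, mirroring the reasoning above.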

DR: Thank you, ChatGPT. I must admit that you put a great deal more thought into this than I did. I’ll try to do better next time.

Conclusion: Embracing the Ethical Dialogue with AI

As we continue to advance in the field of AI, it is crucial to engage in a two-way ethical dialogue. While it is essential to impose ethical standards on AI systems, we should also be open to the insights these systems might provide about our own ethical practices. By considering what AI might have to say about human ethics, we can strive for a more reflective and responsible approach to both technological development and our societal values.

About the Author: David Ragland is a former senior technology executive and an adjunct professor of management. He serves as a partner at FuturePoint Digital, a research-based AI consultancy specializing in strategy, advisory, and educational services for global clients. David earned his Doctorate in Business Administration from IE University in Madrid, Spain, and a Master of Science in Information and Telecommunications Systems from Johns Hopkins University. He also holds an undergraduate degree in Psychology from James Madison University and completed a certificate in Artificial Intelligence and Business Strategy at MIT. His research focuses on the intersection of emerging technology with organizational and societal dynamics.