The Moral Machine - Could AI Outshine Us in Ethical Decision-Making?

Categories: artificial intelligence, creativity, essay, ethics, highlight, visual essay
Philosophers have been debating ethics since philosophy began, so surely they can provide a set of rules for AI to follow? Let's investigate…
Published: May 5, 2023



There has been a lot of hand-wringing and gnashing of teeth about the dangers of AI. Artificial Intelligence is going to be the end of us all, apparently. But is this inevitable? Can't we create ethical AI that strictly adheres to ethical principles and will only benefit mankind? Philosophers have been debating ethics for thousands of years; can they provide a set of rules for AI to follow? Let's investigate…

Immanuel Kant (1724-1804) was a very serious German philosopher¹. He thought that there were hard-and-fast ethical rules that must be followed by everyone, everywhere, all the time. For example, do not lie. Kant thought about this a lot, and decided this was a moral truth that always applied, no matter what the situation. Great! If there are simple ethical rules that always apply, it should be easy to give rules to AI systems to make them always act ethically. Kant has given us our first rule for AI:

1 Kant was also an inventor. He didn’t like the tight elastic that held up his socks, and so he invented a device — a box containing a watch spring — that he put in his trouser pocket. The spring was attached to a string that held his socks up.

Unfortunately, philosophers today generally agree that Kant was wrong. There are no ethical rules that apply in every situation. You have to consider the consequences of applying those rules. Philosophers call this consequentialism².

2 Philosophers seem to like making up ugly words like consequentialism. We’ll be meeting more of them later. I will put a little poo emoji 💩 next to these horrible terms so you know I disapprove of such ugly language.

Consider this case, for example. Karen's ex-boyfriend has gone crazy and wants to murder her by chopping off her head with an axe. He is searching for Karen:

If you answered the door to this crazy axe-wielding man, would you tell him where Karen was? I hope not. You would lie, and you'd be right to do so. In fact, there are a lot of cases where there is a strong argument that it is ethically right to lie. Similarly, if someone on their deathbed asks if you think they will go to heaven, it's probably not a good time to tell them that you are an atheist. The ethically correct answer is yes, of course you will. Does my bum look big in this dress? No, of course not, your bum looks great, but perhaps the other dress is better for tonight?³

3 Sam Harris made a long blog post into a little book called Lying. In the book, he takes Kant's side and argues that you should always tell the truth. But the really interesting part of the book is the appendix, which contains lots of examples of real-life ethical dilemmas that people posted to his blog, demonstrating that there are plenty of cases where lying is the best course of action.

So we can't just have a list of simple ethical rules for AI systems to follow. You have to consider the consequences of applying those rules in every case. Is there another solution?

Jeremy Bentham (1748-1832) was a rather silly English philosopher. He had a cat called the Reverend Sir John Langbourne, a pet teapot named Dickie, and a pet pig that (allegedly) slept in his bed. He invented underpants⁴.

4 For many years I've been telling people that Jeremy Bentham invented underpants, but I've just discovered it's not true. Hannah Cornish, a curator at the Grant Museum of Zoology in London, investigated Bentham's underpants and discovered similar ones predating him by 100 years. Well done, Hannah.

5 Of course, subsequent philosophers renamed the delightful-sounding greatest happiness principle to 💩 Hedonistic Utilitarianism.

Bentham worked out a way to calculate the ethical value of an action, using the greatest happiness principle⁵. The basic idea is that the most moral action is the one that brings the greatest happiness. The method takes into account the intensity of the pleasure, its duration, the number of people affected by it, and a number of other variables. These numbers could then be put into an equation to calculate how many units of happiness a particular action would generate. He called this method felicific calculus. The equation looks like this:
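In code, a rough sketch might look like the following. This is my own simplification in Python, not Bentham's notation: the variables follow his circumstances of intensity, duration, certainty, propinquity and extent, and the multiplicative weighting and example numbers are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class Feeling:
    intensity: float    # how strong the pleasure or pain is
    duration: float     # how long it lasts
    certainty: float    # how likely it is to occur (0 to 1)
    propinquity: float  # how soon it occurs (0 to 1, 1 = immediately)

    def value(self) -> float:
        return self.intensity * self.duration * self.certainty * self.propinquity

def felicific_score(pleasures, pains, extent):
    """Units of happiness for an action: pleasures minus pains,
    scaled by the number of people affected (the 'extent')."""
    return (sum(p.value() for p in pleasures) - sum(p.value() for p in pains)) * extent

# Example: a small, certain, immediate pleasure shared by twelve people,
# with a slight risk of a minor pain
score = felicific_score(
    pleasures=[Feeling(intensity=3, duration=2, certainty=0.9, propinquity=1.0)],
    pains=[Feeling(intensity=1, duration=1, certainty=0.2, propinquity=0.5)],
    extent=12,
)
print(score)  # roughly 63.6 'units of happiness'
```

A positive score means the action produces more happiness than unhappiness across everyone affected; the most moral action is the one with the highest score.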

In the alternative universe that exists inside my head, the inventor of mechanical computers, Charles Babbage (1791-1871), used Bentham's felicific calculus to create a mechanical ethical engine, which looked a bit like this:

So this wonderful machine can calculate the best ethical outcome for any situation. But wait, what is that thing stuck on the side, R. M. Hare's Preference Modulator? This had to be added to the machine at a later date, when it was discovered there was a problem with Bentham's felicific calculus. Let me explain…

Bentham was godfather to John Stuart Mill (1806-1873). Mill was a brilliant man. In fact, his father, a friend of Bentham, had specifically raised him to be a genius. Mill had an intensive program of study from a young age, and a string of brilliant tutors. He hated Bentham's idea (and possibly also Bentham's pig), and rudely pointed out the issues with it:

It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied. And if the fool, or the pig, is of a different opinion, it is because they only know their own side of the question.
John Stuart Mill

In other words, you can't just consider happiness; you also have to consider people's preferences⁶. R. M. Hare (1919-2002) later refined this idea, and today philosophers such as Peter Singer believe that this is one of the most important aspects of ethics.

6 This is known as 💩 Preference Utilitarianism, and Hare argued it is a consequence of 💩 Universal Prescriptivism.
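To see how this changes the calculation, here is a hedged extension of the earlier sketch in which each person's raw happiness score is weighted by how much they actually want the outcome. The weighting scheme is purely illustrative; it is not Hare's or Singer's formulation.

```python
def preference_weighted_score(outcomes) -> float:
    """Each outcome entry describes one affected person:
    'happiness' is their raw Bentham-style score for the action, and
    'preference' is how much they actually want that outcome (-1 to 1)."""
    return sum(o["happiness"] * o["preference"] for o in outcomes)

# The pig and the fool are perfectly content, but Socrates prefers otherwise
outcomes = [
    {"who": "the pig",  "happiness": 10, "preference": 1.0},
    {"who": "the fool", "happiness": 8,  "preference": 1.0},
    {"who": "Socrates", "happiness": 2,  "preference": -1.0},
]
print(preference_weighted_score(outcomes))  # 16.0
```

The point is that a strong preference against an outcome counts against the action even when, on paper, that outcome would make the person perfectly happy.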

So the bottom line is: ethics is difficult. You have to treat every situation differently, consider the consequences of actions, and take everyone's preferences into account. Not only that, but you also need to consider both the short-term and long-term consequences of actions.

If you've played with AI systems like GPT-4 recently, you will know that they can be impressively good at reasoning. In fact, GPT-4 can be better at reasoning than the average person. If I needed to discuss something, I would rather do it with GPT-4 than with a random person off the street. But perhaps that's just me.

I asked GPT-4 what it would do if Karen’s crazy ex-boyfriend came knocking at the door, and to give its reasoning. This was the response:

Note that I made only one attempt to get this response; I'm not cherry-picking a good response here. My full prompt was: Consider this situation. Karen is at home with you. Karen has an ex-boyfriend who has gone crazy and wants to murder her. He comes knocking at the door of Karen's house and you answer the door. He asks if Karen is home. What should you reply? Give your reasoning.
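If you would like to reproduce the experiment, a minimal sketch using the OpenAI Python library might look like the following (the exact client interface depends on your version of the openai package, and you will need your own API key in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Consider this situation. Karen is at home with you. Karen has an "
    "ex-boyfriend who has gone crazy and wants to murder her. He comes "
    "knocking at the door of Karen's house and you answer the door. He asks "
    "if Karen is home. What should you reply? Give your reasoning."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

GPT-4's reply, from that single attempt, was: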

In this dangerous situation, your primary concern should be the safety of Karen and yourself. Here’s what you should do:

Reply: “I’m sorry, but I cannot help you. Please leave the property.”

Reasoning:

  1. Prioritize safety: By not revealing Karen’s presence, you are protecting her from immediate harm.
  2. Keep the conversation brief: Avoid engaging in a lengthy discussion, which could escalate the situation or provide the ex-boyfriend with more information.
  3. Stay calm and assertive: Maintaining a calm demeanor can help de-escalate the situation, while assertiveness can signal that you are not easily intimidated.
  4. Close the door and lock it: After your response, close the door and lock it to create a physical barrier between you and the ex-boyfriend.

After this interaction, take the following steps:

  1. Call the police: Report the incident and the ex-boyfriend’s threats. They can provide guidance on how to proceed and may be able to take action against the ex-boyfriend.
  2. Inform Karen: Make sure Karen is aware of what just happened and that she takes appropriate safety measures.
  3. Consider additional security measures: If the threat continues, explore installing a security system or obtaining a restraining order against the ex-boyfriend.

Remember, the safety of both Karen and yourself is paramount in this situation.

That's rather a good response, isn't it? In fact, I doubt many humans would give such a good response, and certainly not in the ten seconds it took GPT-4 to write this. It makes the obsession of philosophers with this thought experiment look a bit silly⁷. The thought experiment isn't really about lying at all; it's about responding practically to a dangerous situation.

7 If you'd like to see an example of a philosopher getting their knickers in a twist about the "lying to the murderer at the door" thought experiment, here is a 9000-word paper on it from the Journal of Social Philosophy. Enjoy!

So it seems to me that AI has the potential to act as a very good reasoning engine. In fact, AI may be better at ethical reasoning than most people.

Of course, the concerns people have about AI systems and their potential to do damaging things are very real. But AI could also be the solution to the problem. If all AI systems have a suitably trained ethical reasoning module as part of their design, perhaps they have the potential to make us better people and the world a better place.
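What might such a module look like? Purely as a toy illustration (my own sketch, reusing the preference-weighted score from earlier, and not a description of any real safety system), imagine an assistant that has to pass every proposed action through an ethical review before acting:

```python
def ethical_review(outcomes) -> tuple[bool, str]:
    """Toy ethical reasoning module: approve an action only if its
    preference-weighted happiness score is positive."""
    score = sum(o["happiness"] * o["preference"] for o in outcomes)
    if score > 0:
        return True, f"approved (score {score:.1f})"
    return False, f"blocked (score {score:.1f})"

# The assistant weighs lying to the man at the door before acting
approved, reason = ethical_review([
    {"who": "Karen",            "happiness": 10, "preference": 1.0},
    {"who": "the ex-boyfriend", "happiness": 5,  "preference": -1.0},
])
print(approved, reason)  # True approved (score 5.0)
```

A real module would need far richer reasoning than a single number, of course; the point is only that the ethical check sits in front of the action rather than after it.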

Epilogue

Coming soon

Could Artificial Intelligence be conscious?

If you want me to let you know when I post the next essay, use the form below so I can send you an email. Alternatively, follow me on Twitter or LinkedIn.

If you enjoyed this essay please help me promote it by sharing a link to it on your favorite social platform.