How to teach morals to robots?

That is the question Montréal philosopher Martin Gibert asks in his book Faire la morale aux robots [Teaching morals to robots] (2020).

One of the greatest challenges facing “artificial moral agents” (AMAs) is moral perception: how can they perceive what is good and what is bad?

Many philosophers have been distrustful of emotions, preferring cold rationality. Awakened from his dogmatic slumber,¹ Gibert underscores the relevance of the “emotional turn” in philosophy:

Yet when we look at human psychology, many emotions, such as admiration, shame, pity, guilt, anger or disgust, undeniably have a moral dimension. To feel contempt for someone is to condemn what they are or what they do. Likewise, empathy is an emotional mode that allows us to perceive what others feel, valuable information in any moral decision.

Christine Tappolet, cited by Gibert as an “eminent representative” of this emotional turn, treats emotions as “perceptions of value.”

The keystone of moral action in artificial intelligence may rest on empathy, an emotional prerogative of humans alone. For now?

Martin Gibert, Faire la morale aux robots, Atelier 10, 2020.

Illustration. Faire la morale aux robots, Atelier 10, 2020.

  1. An allusion to Kant, who, in his Prolegomena to Any Future Metaphysics, credits Hume with awakening him from his “dogmatic slumber.”