Theory of Mind (ToM) has long been studied by psychological researchers to understand how infants' cognitive processes develop. The concept was first introduced at the end of the seventies by Premack and Woodruff (1978), who defined ToM as the ability to impute mental states to oneself and others, including desires, knowledge, beliefs, and intentions, in order to predict behaviour. With social robots increasingly present in our lives, we expect this technology to personalise its behaviour to different users according to their preferences, needs, and personalities. Can we use ToM to build a cognitive model for social robots that supports better interactions with humans?
In doing so, we need to evaluate the level of ToM and its impact when using a social robot. A popular measure of ToM abilities is the "false belief understanding" test: can a person understand that someone else may hold a false belief in a specific situation? For example, a person puts an object, such as a chocolate box, in one location and leaves the room; during their absence, the chocolate box is moved to another location. The question is: where will the person look for the chocolate when they come back into the room? Will they look in the first location or the second? From the answer, we can assess part of the person's ToM capacity.
To implement this level of cognitive understanding in a robot, there exist various state-of-the-art methods employing different autonomous techniques, e.g. probabilistic methods with Bayesian networks or data-driven models with neural networks. In the situation described above, the model should be able to track people's beliefs over time and predict their intentions when required, e.g. to help someone who holds a false belief.
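As a minimal illustration of belief tracking in the chocolate-box scenario, the sketch below maintains a discrete belief distribution for one observer and updates it only when that observer witnesses an event. The class name, location labels, and two-location world are hypothetical simplifications for this example, not part of any published method; a real system would use a full Bayesian network or learned model as discussed above.

```python
# Hypothetical sketch of false-belief tracking in a two-location world.
# The observer's belief about the object's location changes only when
# the observer witnesses a move; otherwise it stays stale, so the robot
# can predict a search driven by a false belief.

LOCATIONS = ["cupboard", "drawer"]  # assumed location labels


class BeliefTracker:
    """Tracks one observer's belief distribution over object locations."""

    def __init__(self):
        # Start with a uniform belief over the possible locations.
        self.belief = {loc: 1.0 / len(LOCATIONS) for loc in LOCATIONS}

    def observe_move(self, new_location):
        # The observer sees the object being placed: belief collapses
        # onto that location.
        self.belief = {loc: (1.0 if loc == new_location else 0.0)
                       for loc in LOCATIONS}

    def predicted_search_location(self):
        # The robot predicts the observer will search where their own
        # belief mass is highest, not where the object truly is.
        return max(self.belief, key=self.belief.get)


# Scenario: the person puts the chocolate in the cupboard, leaves the
# room, and the chocolate is moved to the drawer in their absence.
person = BeliefTracker()
person.observe_move("cupboard")   # witnessed first placement
true_location = "drawer"          # moved while the person was away

# The ToM model predicts a false-belief-driven search at the cupboard.
prediction = person.predicted_search_location()
```

A model of this kind lets the robot notice the mismatch between `prediction` and `true_location` and intervene, e.g. by pointing the returning person to the drawer.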