People believe the robot iCub can think — if it smiles

Researchers from the Italian Institute of Technology conducted an experiment in which participants interacted with the anthropomorphic robot iCub. People tend to attribute the ability to make free decisions to robots that behave emotionally: a robot that acts like a human is perceived as one that thinks like a human. Until now, it was unclear whether behaviours such as making eye contact influence whether people attribute thought to a robot.


A study published by the American Psychological Association found that when anthropomorphic robots appear to show human emotions, people are more willing to accept that those robots can think and make decisions.
Agnieszka Wykowska, principal investigator at the Italian Institute of Technology, says that “the relationship between anthropomorphic forms, human-like behaviour and the tendency to attribute autonomous thinking and intentional behaviour to robotics remains to be understood.”


The researchers conducted three experiments involving 119 people to see how they perceived the iCub robot; participants interacted with the robot and also watched videos together with it.


Participants completed a questionnaire before and after they interacted with the robot. They were shown photographs of the robot in various situations and asked to determine whether the robot’s decision in each case was purely mechanical or deliberate. For example, participants viewed three pictures of the robot selecting a tool and were asked to decide whether the robot “grabbed” the closest object or “was fascinated with that tool.” The photographs themselves contained no information about the robot’s intentions.


In the first two experiments, the researchers controlled iCub’s actions to make it appear more emotional. The robot acted friendly and greeted participants, and its cameras could identify participants’ faces and maintain eye contact. The robot then watched three short documentary videos with the participants, responding to them with exclamations and expressions of sadness and happiness.


In the third experiment, the researchers programmed iCub to behave like a machine while participants watched the videos with it. The robot did not maintain eye contact and spoke only in short sentences. Its emotional reactions to the videos were suppressed and replaced by “beeps” and repetitive movements of its neck, head and torso.


The researchers found that participants who watched the videos with the “emotional” version of the robot were more likely than the others to rate its actions as intentional, while those who interacted with the machine-like version were less likely to do so. This suggests that people attribute thought and decision-making to a robot only when it exhibits “human” behaviour.


These results show that people are more likely to believe artificial intelligence can think independently when it behaves like a human. The findings could influence the future design of social robots, such as those used for social assistance.

