How similar are robots and humans? How do they—and more specifically our interaction with them—influence our ability to perceive, think, and act?
A new study, conducted in collaboration between Brown’s Cognitive and Psychological Sciences Department and the Italian Institute of Technology, works to find answers to these questions.
Initially conceptualized by Joo-Hyun Song, associate professor in the CoPsy department, and Alessandra Sciutti, head of the CONTACT (COgNiTive Architecture for Collaborative Technologies) unit at the Italian Institute of Technology, this project falls under a grant encouraging international collaboration between the United States and Europe. Their key goal: to determine whether a robot’s hand can bias a human’s visual attention after human-robot collaboration, the way another human’s hand would.
Humans prioritize collaboration and coordination in their work, even when their skills are not evenly matched. Children learn from a young age to pass the ball in sports and to take turns. This behavior is ingrained in humans but has yet to be applied to robots on a large scale.
Led by Giulia Scorza Azzarà, a graduate student working with Dr. Sciutti, this project merges the studies of perception, cognition, and robotics, with human interaction at its core. It centers on two key concepts: the near-hand effect and the body schema.
“The human's hand is very special,” Song explained. “So when [there is an object] nearby your hand, there is [an] attentional priority. You can shift your attention more efficiently towards your hand.” Referred to as the near-hand effect, this visual phenomenon describes the body’s attentional response to objects near our own hand.
The body schema is the subconscious model humans build of their body’s spatial properties and connections. It is informed by the people around them and constantly adapts to changing environments.
The study—which included fifty-two participants—involved a “joint-sawing” task, in which human participants worked with robots to cut through soap. A human and a robot each held one end of a wire and collaborated to make an indentation through the soap.
The closer the robot’s competence was to a “human” level, the more the human encoded it into their body schema and the more strongly the human’s visual attention was drawn to the robot’s hand after the activity. This stands in direct contrast to inanimate, unrelated objects: there is no shift of attention toward such an object because there is no reason for a human to encode it.
As Song synthesized, “If a robot puts its hand [out], do you think it can attract your attention to the robot's hand? The answer is no. A robot’s hand is a robot's hand, not your hand, so it does not attract your attention. But if you're working together with the robot, just cutting the [soap] for three, four minutes together, then the robot's hands become like your hands [and] attract your attention. Your attention will prioritize [the] robot's hands.”
In short, the human-robot interaction replicated the near-hand effect, Song shared, changing the properties of human perception. Much of the existing research on robots centers on improving pure algorithms, Song explained; this work instead focuses on connecting to and interacting with humans in safe, sensitive, and “very cute” ways.
For example, Song said, “Knowing that once a robot and human collaborate, human’s attention properties change [could] help the robot to be more dynamically adjusted in interact[ing] with humans.”
As human-robot interaction becomes more frequent and more widely studied, Song said, this work can also play a role in understanding the effect of robots on humans and their body schemas.
This research holds implications for rehabilitation efforts, robotics development, virtual reality, performance evaluations, and more.
This work is supported by National Science Foundation award NSF BCS 2043328 and by the ERC Starting Grant wHiSPER (Investigating Human Shared Perception with Robots), G.A. 804388.