Visuo-motor interference is modulated by task interactivity: A kinematic study
Romeo L.
2023-01-01
Abstract
Extensive evidence shows that action observation can influence action execution, a phenomenon often referred to as visuo-motor interference. Little is known about whether this effect can be modulated by the type of interaction agents are involved in, and existing studies report conflicting results. In the present study, we aimed to shed light on this question by recording and analyzing the kinematic unfolding of reach-to-grasp movements performed in interactive and noninteractive settings. Using a machine learning approach, we investigated whether the extent of visuo-motor interference would be enhanced or reduced in two different joint action settings compared with a noninteractive one. Our results reveal that the detrimental effect of visuo-motor interference is reduced when the action performed by the partner is relevant to achieving a common goal, regardless of whether this goal requires producing a concrete sensory outcome in the environment (joint outcome condition) or only a joint movement configuration (joint movement condition). These findings support the idea that during joint actions we form dyadic motor plans, in which both our own and our partner’s actions are represented in predictive terms and in light of the common goal to be achieved. The formation of a dyadic motor plan might allow agents to shift from the automatic simulation of an observed action to the active prediction of the consequences of a partner’s action. Overall, our results demonstrate the unavoidable impact of others’ actions on our motor behavior in social contexts, and how strongly this effect can be modulated by task interactivity.
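To illustrate what a machine learning comparison across conditions could look like in practice, the minimal sketch below decodes the experimental condition from trial-level kinematic features with a cross-validated linear classifier. This is purely illustrative and is not the authors' pipeline: the feature set, the classifier (a linear SVM via scikit-learn), and the synthetic data are all placeholder assumptions.

```python
# Illustrative sketch only (not the authors' method): decoding the
# experimental condition from reach-to-grasp kinematic features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder data: 90 trials x 6 hypothetical kinematic features
# (e.g., peak wrist velocity, time to peak velocity, maximum grip
# aperture, time to maximum aperture, movement duration, path curvature).
X = rng.normal(size=(90, 6))

# Placeholder labels: 0 = noninteractive, 1 = joint movement, 2 = joint outcome.
y = np.repeat([0, 1, 2], 30)

# Standardize features, then fit a linear SVM with 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), LinearSVC())
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2f} (chance = 0.33)")
```

With the random placeholder data above, accuracy should hover around chance (1/3); on real kinematic recordings, above-chance decoding would indicate condition-specific movement signatures, i.e., a modulation of visuo-motor interference by task interactivity.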