Embodied social interaction constitutes social cognition in pairs of humans

It’s been many years since I first started working on agent-based models to demonstrate that social interaction dynamics can constitute individual behavior. I’m very happy and excited to announce that we finally managed to verify some of the predictions that we generated on the basis of the models, as well as of enactive theory more generally, in an actual psychological experiment. I think this is perhaps the strongest empirical evidence we have yet for the interactive constitution of individual cognition, including intersubjective experience!

Embodied social interaction constitutes social cognition in pairs of humans: A minimalist virtual reality experiment

Tom Froese, Hiroyuki Iizuka & Takashi Ikegami

Scientists have traditionally limited the mechanisms of social cognition to one brain, but recent approaches claim that interaction also realizes cognitive work. Experiments under constrained virtual settings revealed that interaction dynamics implicitly guide social cognition. Here we show that embodied social interaction can be constitutive of agency detection and of experiencing another’s presence. Pairs of participants moved their “avatars” along an invisible virtual line and could make haptic contact with three identical objects, two of which embodied the other’s motions, but only one, the other’s avatar, also embodied the other’s contact sensor and thereby enabled responsive interaction. Co-regulated interactions were significantly correlated with identifications of the other’s avatar and reports of the clearest awareness of the other’s presence. These results challenge folk psychological notions about the boundaries of mind, but make sense from evolutionary and developmental perspectives: an extendible mind can offload cognitive work into its environment.

The article is also available for open access here:

http://www.nature.com/srep/2014/140114/srep03672/full/srep03672.html


13 Comments

  1. Tom Froese said,

    January 16, 2014 at 2:03 pm

    Reblogged this on The Life & Mind Seminar Network and commented:

    This goes out to all the interaction enthusiasts! 🙂
    Cheers, Tom

  2. January 16, 2014 at 2:54 pm

Very nice, Tom; I just finished reading it yesterday and really appreciated the clarity of your results.

    I still, however, do not understand the sentence about Auvray et al. experiment’s issue: “One of the caveats is that this interactive self-organization apparently leaves an individual’s conscious or explicit social cognition about the other person, as measured by their clicks, unaffected: a regularly or randomly timed automatic click would have much the same objective outcome – since participants spend more time interacting with each other, clicks would tend to happen more often during their interaction.”

Could you explain this a little more?

Anyway, it would be very interesting to have a comment from Mattia Gallotti and Chris Frith on this!

    • Tom Froese said,

      January 17, 2014 at 11:48 am

      Thanks, Guillaume!

The thing about the original perceptual crossing experiment is the following. They found that people are more likely to click on each other – which is precisely what we want, right? But once you take into account that the players make contact with each other much more frequently than with the other objects, it becomes difficult to interpret the results:

      Are they more likely to click on each other because they actually sense each other’s presence, or is it just a statistical result given that they are more likely to be in contact with each other anyway? When Auvray et al. factored in the relative frequency of contacts, they no longer found any significant correlation between clicks and the other’s avatar.

      In our experiment the significant correlation between clicks and the other’s avatar is preserved, even if we factor in the fact that people make contact with each other much more frequently than with the other object types.
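The statistical point can be illustrated with a toy calculation. The numbers below are made up purely for illustration and are not taken from either paper:

```python
# Toy numbers (purely illustrative, NOT actual experiment data):
# contacts and clicks registered by one player, per object type.
contacts = {"avatar": 80, "shadow": 15, "static": 5}  # contact events
clicks   = {"avatar": 8,  "shadow": 1,  "static": 0}  # clicks on each type

# Raw click counts favor the avatar simply because contact with it
# is far more frequent.
raw_share = {k: clicks[k] / sum(clicks.values()) for k in clicks}

# Normalizing by contact frequency asks a different question:
# given a contact with this object type, how likely is a click?
click_rate = {k: clicks[k] / contacts[k] for k in contacts}

print(raw_share)   # avatar dominates the raw counts
print(click_rate)  # the per-contact rates may tell another story
```

If the per-contact click rate were flat across object types, the raw preference for the avatar would be a statistical artifact of contact frequency – this is the caveat about Auvray et al.’s result. The claim in our paper is that the correlation survives even after this normalization.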

      I hope this helps!
      Tom

      • January 17, 2014 at 5:16 pm

Thanks for the clarification, now I get it!
I will give a seminar at the Jean Nicod Institute in April and will probably meet Mattia Gallotti. I hope we can discuss the constructive dimension of social interaction in a “constructive” manner 😉

  3. Graham Home said,

    February 12, 2014 at 7:17 pm

    Hello Dr. Froese,
I find this study and the subject of social cognition to be absolutely fascinating. I am a student at the Rochester Institute of Technology in Rochester, NY, and I am covering your study for a research paper. Would you be willing to release other time series graphs from your experiment showing the actions of participants leading up to their ‘recognition’ of each other? I would be very interested in seeing the different ways the subjects were able to identify each other. Also, have you considered releasing the virtual-reality construct used in your game as a Web application for everyone to try against other online players? You could exponentially increase the sample size of your experiment that way…just a thought! Thanks in advance for your reply.
    Sincerely,
    Graham Home

    • February 13, 2014 at 8:55 am

Yes, it would be really nice to have a web version!
I have just finished one with non-verbal 1D interaction, but with more of a Turing test flavor: http://www.morphomemetic.org/vpi/
If you are interested, the code is already available as open source.
      Cheers!
      Guillaume

    • Tom Froese said,

      February 13, 2014 at 10:19 am

      Dear Graham,

Many thanks for your interest in our experiment. Yes, I can send you more time series data, including a little program to visualize them as videos. This makes the interactions much easier to understand than static trajectories do. Just send me an e-mail at t.froese@gmail.com and I will send you the files.

      Regarding an online version, I know that Charles Lenay and his group at Compiegne have been working on a perceptual crossing platform that would allow blind people to interact with each other over the Internet. But even just replicating the experiment should not be too hard to do. The most difficult part will be the tactile feedback, because not everyone will have an appropriate output device, such as a vibrating mouse or force-feedback joystick. Perhaps if the output modality gets changed to sound an online system could be more widely usable.

      All the best,
      Tom

  4. Jonas Chatel-Goldman said,

    March 11, 2014 at 12:16 pm

    Hi Tom,

Congrats on this work! It clearly illuminates the importance of interpersonal dynamics and adds a very nice engaged and phenomenological dimension to the original study by Auvray et al.
Some brainstorming ideas: have you considered applying tools from coordination dynamics to predict the clicks or subjective ratings from the players’ time series?
I agree with Graham and Guillaume that it would be wonderful to launch a web version. Regarding the tactile feedback, the problem vanishes if it is developed for mobile devices, which can vibrate easily.

    all the best,
    Jonas

    • March 11, 2014 at 1:59 pm

Good point, Jonas; I also wondered what the relative phases between the two subjects look like.

      Cheers!

      Guillaume

    • Tom Froese said,

      March 11, 2014 at 2:06 pm

      Hi Jonas,

      Many thanks for your kind words. It would be awesome to apply the tools from coordination dynamics to the interaction data. If you and/or Guillaume want to play with the data, I can make the files available to you.

      In fact, we have lots of unpublished data that includes double EEG recording as well. We do not have the expertise to analyze this EEG data, so it has been sitting around unused so far. If you are interested in collaborating on this let me know.

      And yes, a perceptual crossing app for smart phones is an excellent idea! That removes the hassle of having to custom-build HCI equipment.

      Cheers,
      Tom

      • March 11, 2014 at 4:17 pm

Do you plan to release the data online? I recommend FigShare or the Open Science Framework. In any case, I would be very happy to collaborate on both the behavioral and EEG sides.

        Cheers,

        Guillaume

  5. Jonas Chatel-Goldman said,

    March 12, 2014 at 12:34 am

Yes, I was thinking of relative phase as well. Also, we could look for transitions and tipping points in this complex system, using e.g. indicators of critical slowing down.
Collaborating on this would be great fun! Please keep in touch and let’s arrange something.
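For what it’s worth, the relative-phase idea mentioned above can be sketched in a few lines. This is a minimal illustration on synthetic signals (assuming numpy and scipy are available), not an analysis of the actual experiment data:

```python
import numpy as np
from scipy.signal import hilbert

def relative_phase(x, y):
    """Instantaneous relative phase between two 1-D movement series,
    via the analytic signal (Hilbert transform)."""
    phx = np.angle(hilbert(x - np.mean(x)))
    phy = np.angle(hilbert(y - np.mean(y)))
    # wrap the phase difference into (-pi, pi]
    return np.angle(np.exp(1j * (phx - phy)))

# Synthetic example: two oscillators locked with a constant pi/4 lag.
t = np.linspace(0, 20, 4000)
a = np.sin(2 * np.pi * 1.0 * t)
b = np.sin(2 * np.pi * 1.0 * t - np.pi / 4)
rp = relative_phase(a, b)
```

On real movement data one would look at how the distribution of `rp` changes over a trial, e.g. narrowing of the distribution as an indicator of phase locking (edge samples should be discarded because of Hilbert transform boundary effects).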

    best,
    Jonas

    • Tom Froese said,

      March 12, 2014 at 12:53 pm

Okay, I will try to get my hands on the EEG experiment data (it’s somewhere in my old lab in Tokyo). I will let both of you know when I have access to it, but this will happen sometime in April because I’m busy until the end of March.

      By the way, Leonhard Schilbach and Jim Coan also expressed interest in analyzing the EEG data, so perhaps we can work out something all together.

      Cheers,
      Tom

