New article: Embodied Dyadic Interaction Increases Complexity of Neural Dynamics

This is the latest installment in my efforts to show that there is nothing mysterious about the possibility that some mental processes are realized by more than one individual.

Embodied Dyadic Interaction Increases Complexity of Neural Dynamics: A Minimal Agent-Based Simulation Model

Madhavun Candadai, Matt Setzler, Eduardo J. Izquierdo and Tom Froese

The concept of social interaction is at the core of embodied and enactive approaches to social cognitive processes, yet scientifically it remains poorly understood. Traditionally, cognitive science has treated all behavior as the end result of internal neural activity. However, the feedback arising from the interaction between an agent and its environment has become increasingly important for understanding behavior. We focus on the role that social interaction plays in the behavioral and neural activity of the individuals taking part in it. Is social interaction merely a source of complex inputs to the individual, or can social interaction increase the individuals’ own complexity?

Here we provide a proof of concept of the latter possibility by artificially evolving pairs of simulated mobile robots to increase their neural complexity, which consistently gave rise to strategies that take advantage of their capacity for interaction. We found that during social interaction, the neural controllers exhibited dynamics of higher dimensionality than were possible in social isolation. Moreover, by testing evolved strategies against unresponsive ghost partners, we demonstrated that under some conditions this effect was dependent on mutually responsive co-regulation, rather than on the mere presence of another agent’s behavior as such. Our findings provide an illustration of how social interaction can augment the internal degrees of freedom of the individuals who are actively engaged in it.
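For readers who want a concrete sense of what "dimensionality of neural dynamics" can mean in practice: the paper has its own analysis, which I am not reproducing here, but a simple and widely used proxy is the participation ratio of the principal-component variances of the recorded neuron activations. The sketch below is purely illustrative; the function name, array shapes, and the idea of comparing an interaction trial against an isolation trial are my own framing, not code from the paper.

import numpy as np

def participation_ratio(neural_states):
    """Estimate the effective dimensionality of recorded neural activity.

    neural_states: array of shape (timesteps, neurons), e.g. the node
    outputs of an agent's controller logged during a trial.
    Returns a value between 1 (activity confined to a single dimension)
    and the number of neurons (activity spread evenly across all of them).
    """
    centered = neural_states - neural_states.mean(axis=0)
    # Eigenvalues of the covariance matrix = variance along each principal component
    cov = np.cov(centered, rowvar=False)
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# Hypothetical usage: compare the same controller in interaction vs. isolation
# dim_social = participation_ratio(states_logged_during_interaction)
# dim_alone  = participation_ratio(states_logged_in_isolation)

A higher value for the interaction trial than for the isolation trial is the kind of signature the headline result refers to, i.e. neural activity exploring more independent directions of its state space when coupled to a responsive partner.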


New paper: Self-Optimization in Continuous-Time Recurrent Neural Networks

We were able to generalize the powerful self-optimization process to continuous-time recurrent neural networks, the class of neural controller most commonly used in evolutionary robotics.

Self-Optimization in Continuous-Time Recurrent Neural Networks

Mario Zarco and Tom Froese

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
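To give a concrete feel for the procedure, here is a minimal sketch of self-optimization on a small continuous-time recurrent network with asymmetric connections: the network is repeatedly relaxed from random initial states on its original weights plus the learned weights, and Hebbian learning applied to the reached attractor builds an associative memory that gradually reshapes the attractor landscape. All parameter values, the fixed-length settling loop, and the mapping of firing rates to Hebbian terms are illustrative assumptions on my part, not the implementation from the paper.

import numpy as np

rng = np.random.default_rng(0)

N = 10                            # number of neurons
W = rng.normal(0, 1, (N, N))      # asymmetric "constraint" weights (fixed)
W_learned = np.zeros((N, N))      # Hebbian weights accumulated by self-optimization
tau = np.ones(N)                  # time constants
dt = 0.1                          # Euler integration step
alpha = 0.001                     # learning rate (illustrative value)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def settle(weights, steps=2000):
    """Integrate the CTRNN from a random initial state until it (roughly) settles."""
    y = rng.uniform(-1, 1, N)             # neuron states
    for _ in range(steps):
        o = sigmoid(y)                    # firing rates
        dy = (-y + weights @ o) / tau     # standard CTRNN update, no external input or bias
        y += dt * dy
    return sigmoid(y)

for episode in range(1000):
    # Relax on the combined weights, then reinforce the attractor that was reached
    outputs = settle(W + W_learned)
    s = 2 * outputs - 1                   # map firing rates to roughly [-1, 1]
    W_learned += alpha * np.outer(s, s)   # associative memory over visited attractors

In the full method the learned weights are typically bounded or periodically reset and convergence to an attractor is checked explicitly; those details, and the open challenges the paper discusses, are omitted here.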