Three papers in the ALIFE 2019 Proceedings

This year’s artificial life conference (ALIFE 2019) will take place in Newcastle next week.

The conference proceedings have been published by MIT Press under an open access license.

Three of my graduate students will be presenting part of their thesis research. Here are the titles of their contributions, with links to download the full papers:

From embodied interaction to compositional referential communication: A minimal agent-based model without dedicated communication channels

Jorge I. Campos and Tom Froese

Self-optimization in a Hopfield neural network based on the C. elegans connectome

Alejandro Morales and Tom Froese

Applying Social Network Analysis to Agent-Based Models: A Case Study of Task Allocation in Swarm Robotics Inspired by Ant Foraging Behavior

Georgina Montserrat Reséndiz-Benhumea, Tom Froese, Gabriel Ramos-Fernández, and Sandra E. Smith-Aguilar

Talk on self-optimization in life, mind, and society

Next Wednesday, March 20, at 5 pm, I will participate in a discussion of complexity in the sciences at the Colegio Nacional in Mexico. The event spans everything from physics to archaeology. I will draw some links across disciplines in a talk on “Self-optimization in life, mind, and society”.

New paper: Self-Optimization in Continuous-Time Recurrent Neural Networks

We were able to generalize the powerful self-optimization process to continuous-time recurrent neural networks, the class of neural networks most commonly used in evolutionary robotics. A minimal sketch of the technique, and of the network class itself, follows the abstract below.

Self-Optimization in Continuous-Time Recurrent Neural Networks

Mario Zarco and Tom Froese

A recent advance in complex adaptive systems has revealed a new unsupervised learning technique called self-modeling or self-optimization. Basically, a complex network that can form an associative memory of the state configurations of the attractors on which it converges will optimize its structure: it will spontaneously generalize over these typically suboptimal attractors and thereby also reinforce more optimal attractors—even if these better solutions are normally so hard to find that they have never been previously visited. Ideally, after sufficient self-optimization the most optimal attractor dominates the state space, and the network will converge on it from any initial condition. This technique has been applied to social networks, gene regulatory networks, and neural networks, but its application to less restricted neural controllers, as typically used in evolutionary robotics, has not yet been attempted. Here we show for the first time that the self-optimization process can be implemented in a continuous-time recurrent neural network with asymmetrical connections. We discuss several open challenges that must still be addressed before this technique could be applied in actual robotic scenarios.
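
To make the loop described in the abstract concrete, here is a minimal Python sketch of self-optimization in the original discrete Hopfield setting that this work builds on: repeatedly relax the network from random states, and accumulate a Hebbian associative memory of each attractor it converges on. This is an illustration of the general technique, not the paper's CTRNN implementation, and every parameter value (network size, learning rate, number of resets) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50  # number of units (illustrative)

# Fixed random symmetric weights: the "problem" constraints whose
# local attractors are typically suboptimal.
W = rng.uniform(-1.0, 1.0, (N, N))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)

def relax(weights, s, sweeps=10):
    """Asynchronous binary updates; a few sweeps usually settle on an attractor."""
    for _ in range(sweeps * N):
        i = rng.integers(N)
        s[i] = 1 if weights[i] @ s >= 0 else -1
    return s

def energy(weights, s):
    """Hopfield energy under the given weights (lower = more constraints satisfied)."""
    return -0.5 * s @ weights @ s

# Baseline: a typical attractor of the original network alone.
print("before learning:", energy(W, relax(W, rng.choice([-1, 1], N))))

alpha = 0.0005                # Hebbian learning rate (illustrative)
W_learned = np.zeros_like(W)  # associative memory of visited attractors

for _ in range(2000):                     # repeated resets from random states
    s = rng.choice([-1, 1], N)
    s = relax(W + W_learned, s)           # converge on an attractor
    W_learned += alpha * np.outer(s, s)   # memorize its configuration
    np.fill_diagonal(W_learned, 0.0)

# The learned memory generalizes over the visited attractors, so relaxation
# of the combined network now tends to reach configurations that score
# better under the ORIGINAL constraints W.
print("after learning: ", energy(W, relax(W + W_learned, rng.choice([-1, 1], N))))
```

The key design point is that learning happens only at attractors: the memory generalizes over their shared structure, which is what reinforces deeper minima even when they have never been visited.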
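For reference, the network class the paper extends this to is the standard continuous-time recurrent neural network as used in evolutionary robotics. The Euler step below is a generic sketch of those dynamics under the usual formulation, not code from the paper; note that the weight matrix need not be symmetric here.

```python
import numpy as np

def ctrnn_step(y, W, tau, theta, I, dt=0.01):
    """One Euler step of standard CTRNN dynamics:
        tau_i * dy_i/dt = -y_i + sum_j W_ij * sigma(y_j + theta_j) + I_i
    where sigma is the logistic function and W may be asymmetric.
    """
    sigma = 1.0 / (1.0 + np.exp(-(y + theta)))  # firing rates of all units
    return y + dt * (-y + W @ sigma + I) / tau
```

A natural way to port the self-optimization loop to this setting, presumably close in spirit to what the paper does, is to integrate these dynamics from random initial states until they settle and then apply the Hebbian step to the converged firing rates rather than to binary states; with asymmetric weights, however, convergence to a point attractor is no longer guaranteed, which is part of what makes the generalization nontrivial.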