Self-modeling in Hopfield Neural Networks with Continuous Activation Function

The unsupervised learning technique our group has been working on has been extended to a more general class of artificial neural networks.

4E Cognition Group

Finally, a large part of Mario's thesis on unsupervised learning in artificial neural networks has been published and is available open access:

Self-modeling in Hopfield Neural Networks with Continuous Activation Function

Mario Zarco and Tom Froese

Hopfield networks can exhibit many different attractors, most of which are local optima. It has been demonstrated that combining state randomization with Hebbian learning enlarges the basins of attraction of the globally optimal attractors. This procedure, called self-modeling, has so far been applied to symmetric Hopfield networks with discrete states and no self-recurrent connections. We are interested in which of these topological constraints can be relaxed, so we test the self-modeling process in asymmetric Hopfield networks with continuous states and self-recurrent connections. The best results are obtained in networks with modular structure.
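For readers curious how the procedure works in practice, here is a minimal sketch in Python/NumPy. It assumes a tanh activation, Euler-integrated continuous dynamics, a uniform random initial weight matrix, and illustrative values for the network size, step size, learning rate, and number of trials; none of these are the paper's actual parameters, and the modular structure mentioned above is not modeled here.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50          # network size (illustrative assumption)
TRIALS = 500    # reset-and-relax trials (illustrative assumption)
STEPS = 200     # relaxation steps per trial (illustrative assumption)
ALPHA = 1e-4    # Hebbian learning rate; kept small so learning
                # averages over many visited attractors

# Asymmetric random weights with self-recurrent connections
# (the relaxed topology studied in the paper).
W = rng.uniform(-1.0, 1.0, size=(N, N))

def relax(W, s, steps=STEPS, dt=0.1):
    """Euler-integrated continuous Hopfield dynamics, tanh activation."""
    for _ in range(steps):
        s = s + dt * (-s + np.tanh(W @ s))
    return s

for _ in range(TRIALS):
    s = rng.uniform(-1.0, 1.0, size=N)  # state randomization
    s = relax(W, s)                      # settle toward an attractor
    W += ALPHA * np.outer(s, s)          # Hebbian reinforcement of that state
```

The intuition behind the procedure is that better (lower-energy) attractors tend to have larger basins and are therefore visited more often across the randomized trials, so the small Hebbian updates reinforce them disproportionately, further enlarging their basins over time.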


