A new learning algorithm for pattern association in a recurrent neural network is being developed. Unlike conventional models, in which each memory is stored as an attractive fixed point at a discrete location in state space, the new algorithm represents a memory as a line of attraction in each individual feature over a single, highly scalable neural network (NN). Most conventional NN-based approaches use a separate neural network for each group to be classified, which creates problems when scaling to classification systems with large databases.

System diagram

To overcome this problem, a new two-phase network is under development. The first phase is a smoothing network, which pulls the item to be classified toward the mean point of the group it most resembles. A Hopfield NN serves as the base structure and has been modified to the structure shown below. The second phase is a "Winner Take All" neural network, which chooses the group closest to the output of the smoothing network. If the smoothing network performs its job sufficiently well, its output should be very close to the mean object of the chosen group.
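The two-phase pipeline can be sketched in a few lines. This is an illustrative stand-in, not the actual network: the smoothing step is reduced to a single update that pulls the input toward the nearest group mean (the real smoothing network iterates modified Hopfield dynamics), and the winner-take-all phase is modeled as a nearest-mean selection. The group means, the step size alpha, and the Euclidean distance are all assumptions for the sketch.

```python
def dist2(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def smooth(x, means, alpha=0.5):
    # Stand-in for the smoothing network: pull the input part-way
    # toward the nearest group mean.
    nearest = min(means, key=lambda m: dist2(m, x))
    return [xi + alpha * (mi - xi) for xi, mi in zip(x, nearest)]

def winner_take_all(x, means):
    # Phase two: choose the group whose mean is closest to the
    # smoothed output.
    return min(range(len(means)), key=lambda i: dist2(means[i], x))

means = [[0.0, 0.0], [1.0, 1.0]]   # one mean point per group (assumed)
x = [0.8, 0.9]                     # item to classify
print(winner_take_all(smooth(x, means), means))  # → 1
```

Because the smoothing phase has already moved the item toward its group's mean, the winner-take-all decision becomes a simple nearest-neighbor choice among the group means.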

The smoothing network is fully connected, with a self-connection of weight one. Values from all other nodes in the network are sent through an array of nonlinear polynomial weights. For each feature, the weights are chosen to closely approximate a piecewise function centered on the mean of each grouping.
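One way to picture the polynomial weights is to fit a polynomial to a piecewise "desired attractor" update for a single feature. The sketch below is an assumption-laden illustration, not the project's actual weight design: the target update is taken to be the identity inside a band around the group mean (a line of fixed points) and to saturate toward the band outside it, and a least-squares polynomial fit stands in for the chosen weights. The mean, band width, and polynomial degree are all hypothetical.

```python
import numpy as np

mean, half_width = 0.5, 0.2          # assumed group mean and band width

def desired_update(x):
    # Piecewise target: identity inside the band around the mean,
    # clipped to the band edge outside it.
    return np.clip(x, mean - half_width, mean + half_width)

# Fit polynomial weights to the piecewise target over the feature range.
xs = np.linspace(0.0, 1.0, 201)
coeffs = np.polyfit(xs, desired_update(xs), deg=9)
poly = np.poly1d(coeffs)

# The fit is a close approximation near the mean, where the target is
# smooth; accuracy degrades slightly near the corners of the band.
print(abs(float(poly(mean)) - mean) < 0.05)
```

Each point inside the band is (approximately) a fixed point of the update, which is what gives a line of attraction per feature rather than a single attractive point.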

Desired Attractor Function

This approach has numerous possible applications. Any classification problem that can be broken down into features and may need to interface with a large database is a candidate. Possible applications include face recognition, fingerprint recognition, emotion detection, vehicle identification, color classification, and voice recognition.

VIPS – Vision Lab Project Demonstrations