Biological neural networks have many properties that are attractive for problems in a variety of areas: massive parallelism, fault tolerance, adaptability, generalization, and self-learning. Biological neural networks have been the inspiration for artificial neural networks (ANNs), which are loosely based on their biological counterparts. ANNs have been successfully applied to problems in a wide range of fields, such as stock-market prediction, optimization, image recognition, and control. ANN research started in the 1940s, and in the early 1960s Rosenblatt published his perceptron convergence theorem. The original perceptron consisted of a number of neurons arranged in a single layer. Because it had only a single layer, the perceptron could only learn linearly separable classifications and hence could not solve the XOR problem. It has been shown that a multi-layer perceptron with three layers can solve any mapping problem. In 1986, backpropagation was introduced as a learning algorithm for the multi-layer perceptron. This method uses gradient descent to adjust the weights of the network and typically requires many iterations over the training set.
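To make the training procedure concrete, the following is a minimal sketch (our illustration, not code from any particular system) of backpropagation with gradient descent on the XOR problem. The network size (2 inputs, 4 hidden units, 1 output), learning rate, and epoch count are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# The four XOR patterns and their targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)

lr = 0.5
for epoch in range(20000):          # many iterations over the training set
    # Forward pass
    h = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(h @ W2 + b2)      # network outputs

    # Backward pass: error terms for mean-squared-error loss
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent weight updates
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

mse = float(((out - y) ** 2).mean())
print(f"final MSE: {mse:.4f}")      # the outputs move toward [0, 1, 1, 0]
```

A single-layer perceptron run on the same data cannot drive this error toward zero, which is precisely the limitation the hidden layer removes.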
An ANN can be implemented in hardware using custom or commercially available neural-network chips, or it can be simulated on general-purpose computers. Simulation on general-purpose computers offers the greatest flexibility and is often the most practical approach since the hardware is usually available; however, it has the drawback that training a neural network in this environment is often prohibitively slow. The training time may be reduced by running parts of the simulation in parallel on a parallel machine. A number of methods exist for partitioning the simulation across parallel machines. The simulation of an ANN can be thought of as consisting of several nested loops. This paper describes a classification scheme based on which of these loops are executed in parallel and shows how a number of techniques from the literature fit into the classification. The examples range from simulations on massively parallel computers such as the TMC CM-2 to simulations on LAN-connected workstations. This paper also describes new techniques for efficient simulation of ANNs on clusters of high-performance workstations connected by an ATM network.
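The nested-loop view can be sketched schematically as follows (an illustration of the loop structure, not code from the paper); each loop is a candidate axis for parallel execution: distributing the outer loops gives training-session or pattern (exemplar) parallelism, while distributing the inner loops gives layer, neuron, and weight parallelism.

```python
import math

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(weights, pattern):
    """weights[l][j][i]: weight from input i to neuron j of layer l."""
    activations = pattern
    for layer in weights:                         # loop over layers
        outputs = []
        for neuron in layer:                      # loop over neurons in a layer
            s = 0.0
            for w, x in zip(neuron, activations): # loop over weights of a neuron
                s += w * x
            outputs.append(sigmoid(s))
        activations = outputs
    return activations

def train(weights, training_set, n_epochs):
    for epoch in range(n_epochs):                 # loop over training iterations
        for pattern, target in training_set:      # loop over training patterns
            out = forward(weights, pattern)
            # ... the backward pass would mirror the loops above in reverse ...
    return weights

# Tiny 2-2-1 network with arbitrary illustrative weights
weights = [[[0.5, -0.5], [0.3, 0.8]], [[1.0, -1.0]]]
result = forward(weights, [1.0, 0.0])
```

Which of these loops is unrolled across processors determines the communication pattern, and hence which parallel architecture the resulting method suits best.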