Date of Award

2011-01-01

Degree Name

Master of Science

Department

Computer Engineering

Advisor(s)

Patricia A. Nava

Abstract

Long training times and non-ideal performance have been major impediments to the wider use of Artificial Neural Networks in real-world applications. Current research focuses on two areas of study that aim to address this problem. The first approach seeks to overcome long training times by devising faster learning algorithms, in which a set of interconnection weights for which the network produces negligible error can be found with less computation [Sun98]. The second approach addresses the impediment by implementing existing training algorithms on parallel hardware architectures.

While both approaches offer promising directions for future development in neural networks, it is the parallel-implementation approach that this study pursues. Its main advantages are that it can be applied to already existing training algorithms and, at the same time, can serve as a vehicle for studying improvements in the error and accuracy performance of the trained network. A byproduct of this approach is that it establishes a framework that can provide further speedup for future (faster and more efficient) training algorithms.

This research focuses on a parallel Backpropagation training implementation in which the processing nodes are interconnected in a star topology and arranged in a HOST-WORKERS manner, implemented on a 40-node Beowulf cluster. Four variations of the training algorithm were evaluated on three benchmark problems, and the sequential version was compared against multiple instances of the parallel implementation for an increasing number of processing elements. A decrease in average error was observed, along with an overall decrease in speedup performance as the number of processing elements increased.
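The HOST-WORKERS arrangement described above can be sketched as follows. This is a minimal illustration only, not the thesis's actual implementation: it assumes a data-parallel scheme in which the host partitions the training set among workers, each worker computes a partial gradient on its shard, and the host averages the partials and updates the weights. A one-layer linear model stands in for the full Backpropagation network to keep the sketch short, and the workers are simulated in-process rather than on separate cluster nodes.

```python
# Hedged sketch of the HOST-WORKERS (star-topology) data-parallel training
# pattern. Assumptions: data-parallel gradient partitioning; a linear model
# in place of the thesis's multilayer network; workers simulated sequentially.

def partial_gradient(weights, shard):
    """Worker role: squared-error gradient over this worker's data shard."""
    grad = [0.0] * len(weights)
    for x, y in shard:
        pred = sum(w * xi for w, xi in zip(weights, x))
        err = pred - y
        for j, xi in enumerate(x):
            grad[j] += 2.0 * err * xi
    return grad

def host_train(data, n_workers, lr=0.1, epochs=200):
    """Host role: scatter shards, gather partial gradients, update weights."""
    # Host partitions the training set among the workers (the star's leaves).
    shards = [data[i::n_workers] for i in range(n_workers)]
    weights = [0.0] * len(data[0][0])
    for _ in range(epochs):
        # On the cluster each shard's gradient would be computed on its own
        # node and sent back to the host; here the "workers" run in a loop.
        partials = [partial_gradient(weights, s) for s in shards if s]
        total = [sum(g[j] for g in partials) / len(data)
                 for j in range(len(weights))]
        # Host applies the averaged update and (implicitly) rebroadcasts
        # the new weights to all workers on the next epoch.
        weights = [w - lr * g for w, g in zip(weights, total)]
    return weights

# Toy target: y = 2*x0 + 3*x1 (hypothetical data, for illustration only).
data = [((1.0, 0.0), 2.0), ((0.0, 1.0), 3.0),
        ((1.0, 1.0), 5.0), ((2.0, 1.0), 7.0)]
w = host_train(data, n_workers=4)
```

In a real cluster deployment the in-process loop over shards would be replaced by message passing (e.g. MPI scatter/gather), and the per-epoch communication between host and workers is exactly the overhead that can erode speedup as the number of processing elements grows, consistent with the trend the abstract reports.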

Language

en

Provenance

Received from ProQuest

File Size

117 pages

File Format

application/pdf

Rights Holder

Carlos Beas
