Analysis of a parallelized neural network training program implemented using MPI and RPCs
Parallel computing is a programming paradigm that has proved very useful to the scientific community, as it has greatly reduced computation times for programs that would otherwise take an extraordinarily long time to execute on a single processor. As with any programming paradigm, there exist multiple ways to implement a program with it. In recent years the Message Passing Interface (MPI) has become the de facto standard for parallel programming.

This thesis explored the difference in execution time between two parallel programs: one implemented using MPI and the other implemented using remote procedure calls (RPCs). Both programs follow the Manager-Worker paradigm and solve the same problem, the training of an artificial neural network (ANN) using back-propagation. This training program was chosen because it is computationally intensive and benefits greatly from parallelization: execution time drops sharply as the number of compute nodes is increased.

This thesis showed that both versions of the neural network training program greatly reduced execution time, and that the difference in execution time between the two paradigms (MPI and RPCs) was negligible. Although MPI showed a small advantage in execution time, the research indicates that it is by no means superior to RPCs for solving embarrassingly parallel problems.
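To make the Manager-Worker paradigm concrete, the following is a minimal sketch of data-parallel gradient training in that style, written in Python with `multiprocessing` standing in for MPI or RPC transport. All names and the toy one-weight "network" are illustrative assumptions, not the thesis's actual programs: the manager scatters the current weights and a chunk of training data to each worker, the workers compute partial gradients in parallel, and the manager gathers and applies them.

```python
from multiprocessing import Pool

def worker_gradient(args):
    """Worker role: compute the squared-error loss gradient over one data chunk."""
    w, chunk = args
    grad = 0.0
    for x, y in chunk:
        pred = w * x                    # toy one-weight linear "network"
        grad += 2.0 * (pred - y) * x    # d/dw of (wx - y)^2
    return grad

def train(data, workers=4, lr=0.1, epochs=30):
    """Manager role: split data into chunks, scatter work, gather gradients."""
    w = 0.0
    chunk_size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with Pool(workers) as pool:
        for _ in range(epochs):
            # Scatter (weights, chunk) to each worker; gather partial gradients.
            grads = pool.map(worker_gradient, [(w, c) for c in chunks])
            w -= lr * sum(grads) / len(data)   # apply the averaged gradient
    return w

if __name__ == "__main__":
    # Fit y = 3x; the learned weight should approach 3.0.
    data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]
    print(train(data))
```

In an MPI version of this pattern, the scatter/gather step would be expressed with point-to-point sends and receives (or collective operations), while an RPC version would expose `worker_gradient` as a remotely callable procedure; the division of labor between manager and workers is identical, which is one reason the thesis found the two paradigms so close in performance on this embarrassingly parallel workload.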
Cordova, Hector, "Analysis of a parallelized neural network training program implemented using MPI and RPCs" (2008). ETD Collection for University of Texas, El Paso. AAI1453806.