The backpropagation algorithm is a supervised learning method for an MLP (multi-layer perceptron) network with sigmoidal activation units. The goal is to learn a good mapping from the input data to the output data. When new data is fed to the network, it maps the data to the output space using the intrinsic structure of the training set. This implementation lets the user choose between three different training methods. Although the basic training algorithm is slow compared to the other methods, in some cases it can provide a better representation of the training set.
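As a rough illustration of the basic training algorithm described above (not this package's actual API), the following is a minimal sketch of plain batch backpropagation for a one-hidden-layer MLP with sigmoid units. All names (`train_mlp`, `predict`), the layer sizes, and the learning-rate/epoch settings are hypothetical choices for the example:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_mlp(X, T, n_hidden=8, lr=1.0, epochs=10000, seed=0):
    """Train a one-hidden-layer sigmoid MLP by basic (batch) backpropagation."""
    rng = np.random.default_rng(seed)
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(scale=0.5, size=(n_in, n_hidden))   # input -> hidden weights
    b1 = np.zeros(n_hidden)
    W2 = rng.normal(scale=0.5, size=(n_hidden, n_out))  # hidden -> output weights
    b2 = np.zeros(n_out)
    for _ in range(epochs):
        # Forward pass through both sigmoid layers
        H = sigmoid(X @ W1 + b1)          # hidden activations
        Y = sigmoid(H @ W2 + b2)          # network outputs
        # Backward pass: error deltas use the sigmoid derivative y * (1 - y)
        d_out = (Y - T) * Y * (1.0 - Y)
        d_hid = (d_out @ W2.T) * H * (1.0 - H)
        # Gradient-descent weight updates
        W2 -= lr * H.T @ d_out
        b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_hid
        b1 -= lr * d_hid.sum(axis=0)
    return W1, b1, W2, b2

def predict(X, W1, b1, W2, b2):
    """Forward pass with trained weights."""
    return sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
```

For example, training on the XOR truth table (`X = [[0,0],[0,1],[1,0],[1,1]]`, targets `[[0],[1],[1],[0]]`) and thresholding the outputs at 0.5 typically recovers the XOR mapping, which is the classic demonstration that a hidden layer lets the network capture structure a single layer cannot.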