Mihai's Weblog

February 8, 2010

Backpropagation algorithm

Filed under: Uncategorized — Mihai Vărzaru @ 2:38 pm

Backpropagation is an algorithm used to train feed forward artificial neural networks. It works by presenting the network with a set of input data and the corresponding ideal output data, calculating the actual outputs, and propagating the error (the difference between the ideal and actual outputs) backwards through the network, adjusting the weights by gradient descent. This is useful for learning specific patterns of input and output data so that they can be reproduced afterwards, even from slightly different or incomplete input data. By generalizing from the patterns they are trained with, neural networks can learn virtually any function of the input data.

In this post I will start by explaining what feed forward artificial neural networks are, and afterwards I will explain the backpropagation algorithm used to train them. At the end I will provide a sample implementation. This post expects some knowledge of math and computer programming from the reader.

Artificial neural networks

Artificial neural networks try to simulate the basic functioning of natural neural networks. The human brain has billions of neurons that work together as one big network. Each natural neuron has stronger or weaker connections with other neurons in the brain. It is believed that most of the information the brain retains is stored in these connections, especially in their strengths.

An artificial neural network retains this basic aspect of natural neural networks and uses a set of real-valued weights to remember how strong the connections between its neurons are. Each neuron has a number of input connections from other neurons, a number of output connections to other neurons, and an activation function that is applied to the sum of its inputs to produce the neuron's output value.

Feed forward neural networks have no cycles in their connection graph: data flows in one direction, from inputs to outputs, so a neuron's output connections are distinct from its input ones. In this kind of network the neurons are typically arranged in layers.

[Figure: a feed forward neural network with input neurons N1, N2, N3, hidden neurons N4, N5, output neuron N6, and connection weights Wij]

The feed forward neural network above has 6 neurons arranged in 3 layers. N1, N2 and N3 are called input neurons, as they receive data directly from the user rather than from other neurons. For input neurons the activation function is not used. N4 and N5 are part of what is called a hidden layer of neurons. They are called hidden because the user cannot set or read their values directly; they communicate only with other neurons. N6 is an output neuron; the output neurons deliver the result of the network's processing to the user. Wij are the weights of the connections between the neurons (values that express how strong the connections are).

Hidden and output neurons apply an activation function to their sum of input data. The sum of input data is the sum of the products of the input neurons' output values and the weights of the connections from those neurons.

Sumj = ∑(Ni * wij), where Ni are the input neurons to neuron Nj

For the activation function F(x) we have the output of the neuron Nj:

Nj = F(Sumj) = F(∑(Ni * wij))

In order to be able to learn arbitrary patterns we need a nonlinear activation function. The most commonly used activation function is the sigmoid:

F(x) = 1 / (1 + e^(-x))

Because the sigmoid function takes values in the (0, 1) interval, you will have to scale your output values to and from that interval.
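
As a small illustration, here is the sigmoid as a C# helper; the name Sigmoid and the sample call are mine:

    using System;

    double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    Console.WriteLine(Sigmoid(0.0)); // prints 0.5, the midpoint of the (0, 1) interval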

In our example above we have:

N1, N2 and N3 as input values.
N4 = F(w14*N1 + w24*N2 + w34*N3)
N5 = F(w15*N1 + w25*N2 + w35*N3)
Output = N6 = F(w46*N4 + w56*N5)
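
To make the forward pass concrete, here is a hedged C# sketch of the example network; the input values and weights are made up for illustration, and the variable names are mine, chosen to match the notation above:

    using System;

    double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    // Illustrative inputs and weights (arbitrary values, for the sketch only).
    double n1 = 0.2, n2 = 0.7, n3 = 0.5;
    double w14 = 0.1, w24 = -0.4, w34 = 0.3;
    double w15 = 0.6, w25 = 0.2, w35 = -0.1;
    double w46 = 0.5, w56 = -0.3;

    // Hidden layer: each neuron applies F to its weighted sum of inputs.
    double n4 = Sigmoid(w14 * n1 + w24 * n2 + w34 * n3);
    double n5 = Sigmoid(w15 * n1 + w25 * n2 + w35 * n3);

    // Output layer.
    double n6 = Sigmoid(w46 * n4 + w56 * n5);
    Console.WriteLine(n6);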

Backpropagation

The purpose of learning is to determine the weights Wij that allow the network to reproduce the provided patterns of inputs and outputs (a function of the inputs). Once the network is trained, we can use it to obtain the expected outputs even from incomplete or slightly different input data. Essentially, it learns a function of arbitrary complexity from examples; the complexity of the functions that can be learned depends on the number of hidden neurons.

Before starting the backpropagation learning iterations you will have to initialize the weights with random values, typically in the interval (-1, 1).
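
For example, a possible sketch of initializing the input-to-hidden weights of our example network (the array shape and names are mine):

    using System;

    var rng = new Random();
    var weights = new double[3, 2]; // input-to-hidden weights of the example network

    for (int i = 0; i < weights.GetLength(0); i++)
        for (int j = 0; j < weights.GetLength(1); j++)
            weights[i, j] = rng.NextDouble() * 2.0 - 1.0; // roughly uniform in (-1, 1)

    Console.WriteLine(weights[0, 0]); // some value in (-1, 1)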

A backpropagation step for a specific input pattern and ideal output starts by calculating the error at the output neurons. This error is the difference between the provided ideal output and the calculated actual output, multiplied by the derivative of the activation function at that point. For the sigmoid function the derivative is F'(x) = F(x) * (1 – F(x)).

OutputErrorj = (IdealOutputj – Outputj) * F'(Sumj)

In our example, using the sigmoid as an activation function:

N6_Error = (N6_Ideal – N6) * N6 * (1-N6)
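
Continuing the forward-pass sketch above in C# (n6Ideal is an illustrative target of my choosing):

    // n6 is the actual output computed above; n6Ideal is the provided ideal output.
    double n6Ideal = 1.0; // illustrative target
    double n6Error = (n6Ideal - n6) * n6 * (1.0 - n6); // sigmoid derivative folded in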

After we have the errors for the output layer, we calculate an error for each neuron in the hidden layers, going backwards, layer by layer. The error of a neuron in a hidden layer is the sum of the products between the errors of the neurons in the next layer and the weights of the connections to those neurons, multiplied by the derivative of the activation function.

HiddenErrori = ∑(OutputErrorj * wij) * F'(Sumi)

In our example:

N4_Error = ( N6_Error * w46 ) * N4 * (1 – N4)
N5_Error = ( N6_Error * w56 ) * N5 * (1 – N5)
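
Continuing the same C# sketch, the hidden neuron errors become:

    // Each hidden neuron's error flows back from N6 through its outgoing weight.
    double n4Error = (n6Error * w46) * n4 * (1.0 - n4);
    double n5Error = (n6Error * w56) * n5 * (1.0 - n5);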

We will use these errors to calculate the variation of the weights resulting from the current input pattern and ideal outputs. The variation (delta) of a weight is the product of the connection's input neuron output value and the error of the connection's output neuron. Note that here "input neuron" and "output neuron" refer to the two ends of a connection, not to the network's input and output neurons.

∆wij = Outputi * Errorj

In our example:

∆w46 = N4 * N6_Error
∆w56 = N5 * N6_Error

∆w14 = N1 * N4_Error
∆w24 = N2 * N4_Error
∆w34 = N3 * N4_Error
∆w15 = N1 * N5_Error
∆w25 = N2 * N5_Error
∆w35 = N3 * N5_Error
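
Continuing the sketch, the deltas for the current pattern:

    // Delta = output value of the connection's input neuron * error of its output neuron.
    double dw46 = n4 * n6Error, dw56 = n5 * n6Error;
    double dw14 = n1 * n4Error, dw24 = n2 * n4Error, dw34 = n3 * n4Error;
    double dw15 = n1 * n5Error, dw25 = n2 * n5Error, dw35 = n3 * n5Error;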

This process is repeated for all input patterns and the variations (deltas) are accumulated. At the end of a learning iteration we update the actual weights by adding the deltas accumulated over all training patterns, multiplied by a learning rate (a number typically between 0 and 1 that controls how fast the network converges to a result). A neural network typically needs at least a few hundred iterations to learn a set of patterns.

∆wij_Final = ∑∆wij_Inputk
wij = wij + ( ∆wij_Final * LearningRate )

In our example, considering 2 input patterns and a learning rate of 0.3, we have for w46:

∆w46_Final = ∆w46_Input1 + ∆w46_Input2
New w46 = w46 + 0.3 * ∆w46_Final
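
A small self-contained C# sketch of that update for w46; the per-pattern delta values are invented for illustration:

    using System;

    // Per-pattern deltas for w46, as computed in the previous step (values invented).
    double dw46Pattern1 = 0.021, dw46Pattern2 = -0.008;
    double w46 = 0.5;                               // current weight, for illustration
    const double learningRate = 0.3;

    double dw46Final = dw46Pattern1 + dw46Pattern2; // accumulate over all patterns
    w46 += learningRate * dw46Final;                // apply once per learning iteration
    Console.WriteLine(w46);                         // 0.5 + 0.3 * 0.013 = 0.5039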

To summarize, in order to train a network using backpropagation, we follow these steps (a condensed code sketch follows the list):

– Initialize the weights with random values
– For a specified number of training iterations do:

  1. For each input and ideal (expected) output pattern
    1. Calculate the actual output from the input
    2. Calculate the output neurons' errors
    3. Calculate the hidden neurons' errors
    4. Calculate the weight variations (deltas): ∆wij
    5. Add the weight variations to the accumulated deltas
  2. Learn by adding the accumulated deltas, multiplied by the learning rate, to the weights
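
Putting the steps together, here is a condensed, self-contained C# sketch of the whole procedure for the 3-2-1 example network. It is an illustration of the algorithm above, not the linked NeuralNet.cs; the training patterns, targets and iteration count are invented:

    using System;

    class BackpropSketch
    {
        // The sigmoid activation function.
        static double F(double x) => 1.0 / (1.0 + Math.Exp(-x));

        static void Main()
        {
            var rng = new Random();
            var wIH = new double[3, 2]; // input-to-hidden weights
            var wHO = new double[2];    // hidden-to-output weights
            for (int i = 0; i < 3; i++)
                for (int j = 0; j < 2; j++)
                    wIH[i, j] = rng.NextDouble() * 2 - 1; // random in (-1, 1)
            for (int j = 0; j < 2; j++)
                wHO[j] = rng.NextDouble() * 2 - 1;

            // Two made-up training patterns: inputs and their ideal outputs.
            double[][] inputs = { new[] { 0.0, 0.0, 1.0 }, new[] { 1.0, 1.0, 0.0 } };
            double[] ideals = { 0.1, 0.9 }; // targets kept inside (0, 1) for the sigmoid
            const double learningRate = 0.3;

            for (int iter = 0; iter < 1000; iter++)
            {
                var dIH = new double[3, 2]; // deltas accumulated over this iteration
                var dHO = new double[2];

                for (int p = 0; p < inputs.Length; p++)
                {
                    double[] x = inputs[p];

                    // 1. Forward pass: calculate the actual output.
                    var hidden = new double[2];
                    for (int j = 0; j < 2; j++)
                    {
                        double sum = 0;
                        for (int i = 0; i < 3; i++) sum += x[i] * wIH[i, j];
                        hidden[j] = F(sum);
                    }
                    double outSum = 0;
                    for (int j = 0; j < 2; j++) outSum += hidden[j] * wHO[j];
                    double output = F(outSum);

                    // 2. Output neuron error (sigmoid derivative folded in).
                    double outErr = (ideals[p] - output) * output * (1 - output);

                    // 3. Hidden neuron errors, propagated back through wHO.
                    // 4.-5. Calculate and accumulate the weight deltas.
                    for (int j = 0; j < 2; j++)
                    {
                        double hidErr = outErr * wHO[j] * hidden[j] * (1 - hidden[j]);
                        dHO[j] += hidden[j] * outErr;
                        for (int i = 0; i < 3; i++) dIH[i, j] += x[i] * hidErr;
                    }
                }

                // Learn: apply the accumulated deltas once per iteration.
                for (int j = 0; j < 2; j++) wHO[j] += learningRate * dHO[j];
                for (int i = 0; i < 3; i++)
                    for (int j = 0; j < 2; j++)
                        wIH[i, j] += learningRate * dIH[i, j];
            }
        }
    }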

I’ve made a C# implementation of a feed forward neural network with just one hidden layer that uses backpropagation: NeuralNet.cs.

I also made an example project (using Visual C# Express Edition) where I test the neural network and backpropagation on simple digit recognition: nnet.zip

December 17, 2009

GSeptica card game

Filed under: Uncategorized — Mihai Vărzaru @ 4:47 pm

I have just created a new game: a GTK implementation of "Septica", a card game popular in Romania. It has a simple and intuitive interface (mostly, you just drag cards down onto the deck area). The game itself is simple to play, and you can find the rules on the website or in the game's help menu.

Take a look:  http://gseptica.sourceforge.net
