Neural networks can learn faster using a ‘feedback alignment’ algorithm, claims the University of Oxford.
It uses random feedback matrices to process errors and update network parameters.
Multi-layer neural networks, inspired by the brain, are used to recognise speech and images within data sets.
Conventionally, these are trained using a ‘training set’ of inputs and expected outputs.
The difference between the expected and actual outputs is fed back to adjust, and hopefully improve, the connection weights of each layer.
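For readers who want the mechanics, here is a minimal NumPy sketch of that conventional scheme, back-propagation, for a two-layer network; the sizes, data and learning rate are arbitrary placeholders, not anything specified by Oxford:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer network; all sizes and data are placeholders.
n_in, n_hid, n_out, lr = 4, 8, 2, 0.1
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))

x = rng.normal(size=(n_in, 1))      # one training input
y = rng.normal(size=(n_out, 1))     # its expected output

# Forward pass
h = np.tanh(W1 @ x)                 # hidden-layer activity
y_hat = W2 @ h                      # actual output

# Error between actual and expected output
e = y_hat - y

# Back-propagation: the error is carried back through the
# *transpose of the forward weights*, W2.T
delta_h = (W2.T @ e) * (1 - h**2)   # tanh derivative

# Adjust the connection weights of each layer
W2 -= lr * e @ h.T
W1 -= lr * delta_h @ x.T
```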
However, “it would be impossible for the brain to implement the highly complex algorithms currently used to train these deep neural networks on a computer”, said Isis Innovation, the University’s intellectual property licensing company.
Understanding this led to the feedback alignment algorithm, which has simpler circuitry requirements and is said to bring a number of unexpected benefits. For example, networks can be trained more quickly than with techniques such as ‘back-propagation of errors’.
“Feedback alignment is often quicker than existing methods. Novel network dynamics allow learning steps which approximate second order techniques, with no more computation than that required for a first order technique,” said Isis. “The technique is more robust to network initialisations, and is successful even when other algorithms struggle to learn at all. Decoupling the feedback function avoids the central difficulty with training deep neural networks: the ‘vanishing gradient’ problem.”
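Based on the description above, the core change can be sketched as follows: the forward pass is unchanged, but the error is fed back through a fixed random matrix (called B here, an illustrative name) rather than the transposed forward weights. This is a sketch of the general technique, not Oxford’s patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 4, 8, 2, 0.1
W1 = rng.normal(0, 0.5, (n_hid, n_in))
W2 = rng.normal(0, 0.5, (n_out, n_hid))
B = rng.normal(0, 0.5, (n_hid, n_out))  # fixed random feedback, never trained

x = rng.normal(size=(n_in, 1))
y = rng.normal(size=(n_out, 1))

h = np.tanh(W1 @ x)                     # forward pass is unchanged
y_hat = W2 @ h
e = y_hat - y

# The only change from back-propagation: the error is fed back through
# the fixed random matrix B instead of the transposed weights W2.T.
# Decoupling the feedback path this way is what the quote above credits
# with sidestepping the 'vanishing gradient' problem.
delta_h = (B @ e) * (1 - h**2)

W2 -= lr * e @ h.T
W1 -= lr * delta_h @ x.T
```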
It is applicable to both feed-forward and recurrent network architectures, and to regression and classification problems. Existing neural network tools can be modified to use it.
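As one illustration of such a modification, a framework with a pluggable backward pass, PyTorch for example, can substitute random feedback in a custom autograd function. Everything below is an assumed sketch of how that swap might look, not Oxford’s code:

```python
import torch

class FeedbackAlignmentLinear(torch.autograd.Function):
    """Linear layer whose backward pass feeds the error through a fixed
    random matrix B instead of the transposed forward weight W."""

    @staticmethod
    def forward(ctx, x, W, B):
        ctx.save_for_backward(x, W, B)
        return x @ W.t()

    @staticmethod
    def backward(ctx, grad_out):
        x, W, B = ctx.saved_tensors
        grad_x = grad_out @ B        # random feedback path, not grad_out @ W
        grad_W = grad_out.t() @ x    # usual gradient for the forward weight
        return grad_x, grad_W, None  # no gradient for B: it is never trained

# Hypothetical usage: W is trained as normal, B is fixed at initialisation.
W = torch.randn(2, 8, requires_grad=True)
B = torch.randn(2, 8)
x = torch.randn(1, 8)
loss = FeedbackAlignmentLinear.apply(x, W, B).sum()
loss.backward()   # W.grad is populated; the error reaching x travelled via B
```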
For hardware-based networks, in digital cameras for example, the Oxford algorithm reduces the error-feedback precision required, said Isis.
The algorithm is the subject of US patent application 61858928.
For more information, contact Isis Innovation.