Reading a neural network’s mind

Neural networks, which learn to perform computational tasks by analyzing huge sets of training data, have been responsible for the most impressive recent advances in artificial intelligence, including speech-recognition and automatic-translation systems.

In several recent papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory and the Qatar Computing Research Institute have used a recently developed interpretive technique, which had been applied in other areas, to analyze neural networks trained to do machine translation and speech recognition.

The accuracy improvement the researchers achieved this way is modest, but it points toward the possibility that analyzing neural networks’ inner workings could help improve the accuracy of artificial-intelligence systems.

The MIT and QCRI researchers’ technique consists of taking a trained network and using the output of each of its layers, in response to individual training examples, to train another neural network to perform a particular task. If that second network performs the task well, it suggests that the layer’s output encodes the information the task requires.
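As a rough sketch of how such a setup might look in code, the snippet below uses PyTorch forward hooks to capture each layer’s output for a single example. The network, its dimensions, and the input are hypothetical stand-ins for illustration, not the researchers’ actual systems.

import torch
import torch.nn as nn

# Hypothetical trained network whose internals we want to inspect.
trained_net = nn.Sequential(
    nn.Linear(40, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 10),
)

# Capture the output of every layer with forward hooks.
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in trained_net.named_modules():
    if isinstance(module, nn.Linear):
        module.register_forward_hook(make_hook(name))

# One (fake) training example: a 40-dimensional feature vector.
example = torch.randn(1, 40)
trained_net(example)  # hooks now hold each layer's output for this example

for name, act in activations.items():
    print(name, act.shape)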

In the case of the speech-recognition network, two of the researchers, Yonatan Belinkov and James Glass, used individual layers’ outputs to train a system to identify “phones,” distinct phonetic units particular to a spoken language.
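Continuing the sketch above, the second network — the probe — can be as simple as a linear classifier trained on the frozen activations. The layer width, the size of the phone inventory, and the data below are again made up for illustration.

import torch
import torch.nn as nn

NUM_PHONES = 48   # size of a hypothetical phone inventory
LAYER_DIM = 128   # width of the layer being probed

# Pretend these came from the hooks above: 1,000 speech frames of
# activations from one layer, each labeled with the phone being spoken.
feats = torch.randn(1000, LAYER_DIM)
labels = torch.randint(0, NUM_PHONES, (1000,))

# The probe: a small classifier trained only on the frozen activations.
probe = nn.Linear(LAYER_DIM, NUM_PHONES)
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    opt.zero_grad()
    loss = loss_fn(probe(feats), labels)
    loss.backward()
    opt.step()

# Higher probe accuracy suggests the layer's representation encodes
# more phonetic information.
with torch.no_grad():
    acc = (probe(feats).argmax(dim=1) == labels).float().mean()
print(f"phone probe accuracy: {acc:.2%}")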

The best-performing machine-translation networks use so-called encoder-decoder models, so the network the MIT and QCRI researchers analyzed does as well.
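For readers unfamiliar with the architecture, here is a bare-bones encoder-decoder sketch: the encoder compresses the source sentence into a vector representation, and the decoder unrolls that representation into the target sentence. Vocabulary sizes and dimensions are placeholders, and real translation systems add attention and other refinements.

import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 1000, 1000, 64, 128

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src):
        # Encode the source sentence into a fixed summary state.
        _, state = self.rnn(self.embed(src))
        return state

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt, state):
        # Unroll over the target sentence, conditioned on the encoding.
        hidden, _ = self.rnn(self.embed(tgt), state)
        return self.out(hidden)

encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (2, 7))   # batch of 2 source sentences
tgt = torch.randint(0, TGT_VOCAB, (2, 9))   # corresponding target sentences
logits = decoder(tgt, encoder(src))
print(logits.shape)  # (2, 9, TGT_VOCAB): a word distribution per position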

Such an improvement is not overwhelming, but it is an indication that looking under the hood of neural networks could be more than an academic exercise.

Article originally posted at news.mit.edu

Post Author: Carla Parsons
