The system could learn to defeat the games and score higher than humans, but it could not remember how it had done so.

“Previously, we had a system that could learn to play any game, but it could only learn to play one game,” James Kirkpatrick, a research scientist at DeepMind and the lead author of its new research paper, tells WIRED. “Here we are demonstrating a system that can learn to play several games one after the other”.

The work, published in the journal Proceedings of the National Academy of Sciences, explains how DeepMind’s AI can learn tasks in sequence, demonstrated with both supervised learning and reinforcement learning tests.

The key is an algorithm called elastic weight consolidation (EWC), which protects the connections that mattered for earlier games while leaving spare capacity for new ones. “That way there is room to learn the new task, but the changes we’ve applied do not override what we’ve learned before”. “Previously, DQN had to learn how to play each game individually,” the paper says. Essentially, a deep neural network using the EWC algorithm was able to learn to play one game and then carry what it had learnt over to a brand new game.
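
For readers who want the mechanics, the idea can be written down compactly: after finishing one game, each network weight is anchored to its old value with a stiffness proportional to how important that weight was for the old game, so learning the next game only reshapes the weights that can afford to move. The sketch below, in Python with NumPy, illustrates that quadratic penalty; the function names, the toy numbers and the lambda value are assumptions for illustration, not DeepMind’s code.

```python
import numpy as np

# Minimal sketch of the quadratic penalty behind elastic weight
# consolidation (EWC). Function names, toy numbers and the lambda
# value are illustrative assumptions, not DeepMind's actual code.

def ewc_penalty(theta, theta_old, fisher, lam=0.4):
    """EWC anchor term: (lam / 2) * sum_i F_i * (theta_i - theta_old_i)^2.

    fisher[i] estimates how important parameter i was for the old game;
    a large value makes that parameter stiff, a small value leaves it
    free to change while the network learns the new game.
    """
    return 0.5 * lam * np.sum(fisher * (theta - theta_old) ** 2)

def total_loss(new_task_loss, theta, theta_old, fisher, lam=0.4):
    """Loss minimised on the new game: its own loss plus the anchor term."""
    return new_task_loss + ewc_penalty(theta, theta_old, fisher, lam)

# Toy example: the first weight mattered a lot for the old game
# (high importance), so moving it is penalised far more than the others.
theta_old = np.array([1.0, -2.0, 0.5])   # weights after the old game
fisher    = np.array([5.0, 0.01, 0.01])  # per-weight importance estimates
theta     = np.array([0.2, -1.0, 1.5])   # weights part-way through the new game
print(total_loss(0.8, theta, theta_old, fisher))
```

In practice the importance estimates come from gradients measured on the old task, and the penalty is simply added to whatever loss the new game defines, so the rest of the training loop is unchanged.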

While the network is able to learn from its previous experiences and retain the most useful information, it isn’t able to perform as well as a neural network dedicated to just one game.

Article originally posted at www.facebook.com

Post Author: Carla Parsons
