Friday, March 6, 2015

AI playing video games

http://ift.tt/1wMV4lW



Deep Mind is a neural network which takes an image of a video game screen as input and outputs joystick movements and button presses. It is not tuned to any specific video game, and it begins its learning cycle with no knowledge whatsoever of what the blobs on the screen mean -- initially they are just pixels. Deep Mind starts by moving the joystick and pressing buttons at random, and is rewarded with the game score. Actions which produce a higher score take priority over those which do not:


Quote:

an algorithm that allows the juvenile A.I. to analyze its previous performance, decipher which actions led to better scores, and change its future behavior accordingly. Combined with the deep neural network, this gives the program more or less the qualities of a good human gamer: the ability to interpret the screen, a knack for learning from past mistakes, and an overwhelming drive to win.



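The learning loop described above -- try actions, observe the score, and let higher-scoring actions take priority -- is the idea behind reinforcement learning. The real system uses a deep convolutional network with experience replay over raw Atari pixels, but the core can be sketched with plain tabular Q-learning on a made-up toy "game" (the environment, states, and parameters below are all hypothetical, for illustration only):

```python
import random

random.seed(0)

# Hypothetical toy "game": two states; action 1 scores a point, action 0 does not.
# (The real system instead sees raw game pixels and has 18 joystick actions.)
def step(state, action):
    reward = 1 if action == 1 else 0
    next_state = (state + 1) % 2
    return next_state, reward

# Q[state][action] estimates the long-term score of taking that action,
# so actions which produced higher scores take priority in the future.
Q = [[0.0, 0.0], [0.0, 0.0]]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration rate

state = 0
for _ in range(1000):
    # Epsilon-greedy: mostly exploit the learned values, sometimes act randomly,
    # mirroring the "random joystick movements" early in training.
    if random.random() < epsilon:
        action = random.choice([0, 1])
    else:
        action = 0 if Q[state][0] > Q[state][1] else 1
    next_state, reward = step(state, action)
    # Nudge the estimate toward the observed reward plus discounted future value.
    Q[state][action] += alpha * (reward + gamma * max(Q[next_state]) - Q[state][action])
    state = next_state

# The learned policy: the preferred action in each state.
best = [0 if q[0] > q[1] else 1 for q in Q]
```

After training, the scoring action wins in both states, with no hand-coded knowledge of the "game's" rules -- the same principle, scaled up enormously, behind the Atari results.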
The article illustrates just how different AI is likely to remain from human intelligence -- simply because it lives in a very different sensory world, and comes without the human set of needs and desires:


Quote:

"Their current line of research leads to StarCraft in five or ten years and Call of Duty in maybe twenty, and controllers for drones in live battle spaces in maybe fifty. But it never, ever leads to a toddler."



Which is fine by me. I do not remember who said it, but I like this quote: "Asking whether machines can think is as pointless as asking whether submarines can swim."



Here is a more technical article on Deep Mind:



Human-level control through deep reinforcement learning





via International Skeptics Forum http://ift.tt/1BgkBm7
