Google's DeepMind Artificial Intelligence System Learns How To Parkour

July 11, 2017


Seen here looking like a very early prototype of Snapchat's new dancing hotdog filter, this is a video demonstration of Google's DeepMind artificial intelligence system learning how to navigate a virtual parkour course through a process of reinforcement learning. Some more info while I bet my coworkers I can jump from the top of one cubicle wall to another without falling:

At its most basic level, the system worked as follows: the faster the AI moved across the terrain, the greater the reward. Additional incentives and penalties were added for more complex programs.

The AI used a trial and error system to figure out how to move forward as fast as possible without "terminating."
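The setup described above — reward for speed, a penalty for falling over ("terminating"), and pure trial and error — can be sketched in a few lines. This is a toy stand-in, not DeepMind's actual simulator or algorithm (they used policy-gradient reinforcement learning in a physics engine); the environment, the `stride` parameter, and the numbers here are all invented for illustration:

```python
import random

def run_episode(stride, steps=100):
    """Toy stand-in for the parkour course (assumption: not DeepMind's
    real environment). Each step the agent earns reward equal to the
    distance it covers, so bigger strides mean faster movement -- but an
    overly aggressive stride makes the runner fall, ending the episode
    with a penalty, just like "terminating" in the quoted description."""
    total_reward = 0.0
    for t in range(steps):
        total_reward += stride            # reward: speed across the terrain
        if stride > 0.5 and t >= 2:       # overreaching: falls on the third step
            total_reward -= 5.0           # penalty for terminating early
            break
    return total_reward

def trial_and_error(trials=200, seed=0):
    """Crude trial and error: sample random strides, keep the best one.
    (Real RL updates a policy from gradients; random search is just the
    simplest way to show the reward signal doing the teaching.)"""
    rng = random.Random(seed)
    best_stride, best_reward = None, float("-inf")
    for _ in range(trials):
        stride = rng.uniform(0.01, 1.0)
        reward = run_episode(stride)
        if reward > best_reward:
            best_stride, best_reward = stride, reward
    return best_stride, best_reward
```

After a couple hundred trials the search settles on the fastest stride that doesn't cause a fall — which is the whole idea: nobody tells the agent how to move, the reward for speed and the penalty for wiping out do all the teaching.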

Awesome, so apply this technology to one of Boston Dynamics' robots and you've got an unstoppable killer robot army ready to start wreaking havoc on humanity. Honestly, I'm all for it. I'm tired of caring. If nobody is going to take the robot threat seriously then I'll just sit back and watch the world burn from the comfort of a folding chair on the moon. "How are you getting there?" Portals. You actually thought you were going to stump me there for a second, didn't you?

Keep going for the video.

Thanks to Andreas, who agrees parkour hotdogs are cool and all, but only if the course ends in your mouth.
