Alphabet’s DeepMind AI Beats Humans in Multiplayer Shooter

The AI played a modified version of Quake III Arena in its Capture the Flag game mode.


This illustration shows agents playing Capture the Flag, exhibiting a range of behaviours. (Photo Credit: Handout/DeepMind/AFP)


Watch out, professional gamers: machines may soon be coming for your jobs.

A team of programmers at a British artificial intelligence company has designed automated "agents" that taught themselves how to play a competitive multiplayer first-person shooter, and became so good they consistently beat human beings.

The work of the researchers from DeepMind, which is owned by Google's parent company Alphabet, was described in a paper published in Science on Thursday and marks the first time the feat has ever been accomplished.

To be sure, computers have been flexing their dominance over humans in one-on-one turn-based games such as chess ever since IBM's Deep Blue beat Garry Kasparov in 1997. More recently, Google's AlphaGo program beat the world's number one Go player in 2017.

But the ability to play multiplayer games that involve teamwork and interaction in complex environments had remained out of reach.

For the study, the team led by Max Jaderberg worked on a modified version of Quake III Arena, a seminal shooter that was first released in 1999 but continues to thrive in the eSports world.

The game mode they chose was "Capture the Flag," which involves working with teammates to grab the opponent team's flag while safeguarding your own, forcing players to devise complex strategies mixing aggression and defence.

After the agents had been given time to train themselves, they were pitted against professional games testers.

"Even after 12 hours of practice, the human game testers were only able to win 25% of games against the agent team," the team wrote, while the agents' performance remained superior even when their reaction times were artificially slowed down to human levels.

New steps for AI
The programmers relied on so-called "reinforcement learning" (RL) to imbue the agents with their smarts.

"Initially, they knew nothing about the world and instead were doing completely random stuff and bouncing about the place," Jaderberg told AFP.

The agents were taught to reward themselves for capturing the flag, but the team also devised a series of new and innovative methods to push the boundaries of what is possible with RL.

"One of the contributions of the paper is each agent learns its own internal reward signal," said Jaderbeg, meaning that the AI players gave themselves a pat on the back of varying magnitude for accomplishing tasks such as picking up the flag or successfully shooting an opponent.

Next, they found that training a population of agents together, rather than one at a time, made the population as a whole learn much faster.
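
At a very high level, training a population together can be sketched as follows: weaker agents periodically copy the settings of stronger ones and perturb them slightly. The agent names, scores and "train_one_match" stand-in below are invented for illustration, not taken from the paper.

```python
# A rough sketch of population-based training: rank the population, then let
# the bottom half inherit (and slightly mutate) the top half's parameters.
import random


def train_one_match(agent):
    """Stand-in for playing matches: pretend better 'skill' wins more often."""
    agent["score"] += agent["skill"] + random.gauss(0, 0.1)


population = [{"name": f"agent_{i}", "skill": random.random(), "score": 0.0}
              for i in range(8)]

for generation in range(20):
    for agent in population:
        train_one_match(agent)

    # Rank the population and let the bottom half inherit from the top half.
    population.sort(key=lambda a: a["score"], reverse=True)
    top, bottom = population[:4], population[4:]
    for loser, winner in zip(bottom, top):
        loser["skill"] = winner["skill"] + random.gauss(0, 0.05)  # copy + mutate
    for agent in population:
        agent["score"] = 0.0  # start the next generation's evaluation afresh

best = max(population, key=lambda a: a["skill"])
print(best["name"], round(best["skill"], 3))
```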

They also devised a new "two-timescale" learning architecture, which Jaderberg likened to the thesis of the book "Thinking, Fast and Slow."

"You have one part of the agent which kicks very quickly, it updates its own beliefs very quickly, and you have another part of the agent, which updated belief at a slower rate, and these two beliefs influence each other and help shape the way the agent learns about the world," he said.

Finally, randomising the map for each new match was key. "That meant the solutions that the agents find have to be general - they cannot just memorise a sequence of actions," said co-author Wojciech Czarnecki.
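
Map randomisation can be sketched as generating a fresh layout for every match, so a memorised route is useless. The grid format and flag placement below are invented; the paper uses procedurally generated indoor and outdoor maps.

```python
# A small sketch of per-match map randomisation: every match gets a freshly
# generated layout ('#' walls, '.' floor, 'A'/'B' flag bases).
import random


def random_map(width=9, height=9, wall_prob=0.25, seed=None):
    rng = random.Random(seed)
    grid = [["#" if rng.random() < wall_prob else "."
             for _ in range(width)] for _ in range(height)]
    grid[0][0] = "A"                      # team A flag base
    grid[height - 1][width - 1] = "B"     # team B flag base
    return grid


for match in range(2):
    layout = random_map(seed=match)
    print(f"--- match {match} ---")
    print("\n".join("".join(row) for row in layout))
```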

Ethics questions
The team did not comment, however, on the AI's potential for future use in military settings.

DeepMind has publicly stated in the past that it is committed to never working on any military or surveillance projects, and the word "shoot" does not appear even once in the paper (the process is described instead as tagging opponents by pointing a laser gadget at them).

Moving forward, Jaderberg said his team would like to explore having the agents play the full version of Quake III Arena and to find ways the AI could be applied to problems outside of games.

"We use games, like Capture the Flag, as challenging environments to explore general concepts such as planning, strategy and memory, which we believe are essential to the development of algorithms that can be used to help solve real-world problems."
