Welcome to the World of DeepNash: Mastering Stratego with AI
Research
- Published
- Authors
Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, and Karl Tuyls
DeepNash learns to play Stratego from scratch by combining game theory and model-free deep RL
Game-playing artificial intelligence (AI) systems have advanced to a new frontier. The latest breakthrough is mastering the classic board game Stratego, which is more complex than chess and Go, and craftier than poker. In groundbreaking research published in Science, we introduce DeepNash, an AI agent that reached human-expert level at Stratego by learning through self-play.
DeepNash’s innovative approach, blending game theory and model-free deep reinforcement learning, enables it to converge to a Nash equilibrium, making its gameplay hard to exploit. DeepNash has even ranked in the top three among human experts on the online platform Gravon.
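To make the "hard to exploit" property concrete, here is a minimal illustrative sketch (not DeepNash's actual algorithm) using rock-paper-scissors, a much simpler zero-sum game with a known Nash equilibrium. The `exploitability` function below measures how much a best-responding opponent can gain against a given strategy; at a Nash equilibrium this value is zero, which is the sense in which equilibrium play cannot be exploited.

```python
import numpy as np

# Rock-paper-scissors payoff matrix for the row player:
# A[i][j] = row player's payoff when row plays i and column plays j
# (0 = rock, 1 = paper, 2 = scissors).
A = np.array([
    [ 0, -1,  1],   # rock:     ties rock, loses to paper, beats scissors
    [ 1,  0, -1],   # paper:    beats rock, ties paper, loses to scissors
    [-1,  1,  0],   # scissors: loses to rock, beats paper, ties scissors
], dtype=float)

def exploitability(strategy: np.ndarray) -> float:
    """Best payoff an opponent can earn by best-responding to `strategy`.

    In a zero-sum game the opponent's payoff is the negative of ours,
    so a pure response j earns -(strategy @ A)[j]; the opponent picks
    the best such response. At a Nash equilibrium this maximum is 0.
    """
    return float(np.max(-(strategy @ A)))

uniform = np.ones(3) / 3             # the Nash equilibrium of RPS
biased = np.array([0.5, 0.3, 0.2])   # a rock-heavy, exploitable strategy

print(exploitability(uniform))  # 0.0: no counter-strategy gains anything
print(exploitability(biased))   # ≈ 0.3: playing paper profits against it
```

Stratego's state space is far too large to enumerate a payoff matrix like this; DeepNash instead converges toward equilibrium play through model-free deep reinforcement learning in self-play, but the unexploitability guarantee it targets is the same idea.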
Historically, board games have served as benchmarks for AI progress, offering insights into human-machine strategic interaction. Unlike chess and Go, Stratego is a game of imperfect information: players cannot see their opponent's pieces, which defeats the game-tree-search methods behind earlier game-playing systems. DeepNash's achievement required going beyond those techniques.
The significance of mastering Stratego extends beyond gaming: it paves the way for AI systems that operate in real-world settings where information is limited. DeepNash's success demonstrates the potential of this approach for solving complex problems amid uncertainty.
Paper authors
Julien Perolat, Bart De Vylder, Daniel Hennes, Eugene Tarassov, Florian Strub, Vincent de Boer, Paul Muller, Jerome T Connor, Neil Burch, Thomas Anthony, Stephen McAleer, Romuald Elie, Sarah H Cen, Zhe Wang, Audrunas Gruslys, Aleksandra Malysheva, Mina Khan, Sherjil Ozair, Finbarr Timbers, Toby Pohlen, Tom Eccles, Mark Rowland, Marc Lanctot, Jean-Baptiste Lespiau, Bilal Piot, Shayegan Omidshafiei, Edward Lockhart, Laurent Sifre, Nathalie Beauguerlange, Remi Munos, David Silver, Satinder Singh, Demis Hassabis, Karl Tuyls.