AI and games have always been connected. From the simplicity of tic-tac-toe in the ’50s to the strategic intricacies of modern board games, the world of algorithms has been on an ever-evolving quest to conquer the gaming realm. Picture this: a timeline where each decade unfolds a new chapter, introducing challenges that algorithms, in their relentless pursuit, strive to overcome.
Back in the ’50s, tic-tac-toe was the proving ground. Algorithms like Minimax took their baby steps, showcasing the potential to predict outcomes and strategize within a confined space of possibilities. Moving into the late ’90s, chess became the battleground, and Deep Blue’s 1997 victory over Garry Kasparov demonstrated how algorithms could adapt to a far more complex environment.
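To make that concrete, here is a minimal minimax sketch for tic-tac-toe in Python. It is an illustration only, assuming a simple nine-cell list for the board; it is not a reconstruction of the early programs mentioned above.

```python
# Minimal minimax for tic-tac-toe (illustrative sketch, not a historical program).
# The board is assumed to be a list of 9 cells holding "X", "O", or None.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return "X" or "O" if a line is complete, otherwise None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, best_move) from X's point of view: +1 win, -1 loss, 0 draw."""
    champ = winner(board)
    if champ is not None:
        return (1 if champ == "X" else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full with no winner: a draw
    best_score, best_move = None, None
    for move in moves:
        board[move] = player                                   # try the move
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[move] = None                                     # undo it
        better = (best_score is None
                  or (player == "X" and score > best_score)    # X maximizes
                  or (player == "O" and score < best_score))   # O minimizes
        if better:
            best_score, best_move = score, move
    return best_score, best_move

if __name__ == "__main__":
    print(minimax([None] * 9, "X"))  # perfect play ends in a draw: score 0
```

Exhaustive search like this is feasible only because tic-tac-toe has a tiny game tree; the rest of this piece is about what happens when the branching factor makes exhaustive search impossible.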
The game changed once again with Go, a game notorious for its vast space of possibilities. Monte Carlo Tree Search began reshaping computer Go around 2007, and AlphaGo’s 2016 victory over Lee Sedol completed the paradigm shift: rule-based approaches took a backseat, making room for the formidable combination of tree search, neural networks and machine learning.
Now our attention turns to a very interesting experiment with an even more complex game: Commands and Colors: Ancients, a modern board game that throws conventional algorithms for a loop. Incomplete information, chance elements, and an almost endless range of possible outcomes for each move mirror the unpredictability of the real world.
The strategy to victory? Taking Monte Carlo Tree Search further than ever before.
About the video
In the video, Technical Principal @Thoughtworks Italia Matteo Vaccari tackles the super complex strategy game Commands and Colors: Ancients, showcasing an innovative, yet challenging, approach to creating an AI system that can play against and beat a human player.
What you’ll learn
Introduction to Game-Defeating Algorithms:
- Trace the evolution of game-defeating algorithms, from tic-tac-toe to the conquest of Go.
- Highlight the challenges posed by each game and the algorithmic breakthroughs that led to their defeat.
Monte Carlo Tree Search (MCTS):
- Unveil the workings of MCTS, a fundamental technique in modern game AI.
- Explore the core loop of selecting a leaf node, expanding it, performing playouts, and backpropagating results (see the sketch after this list).
- Learn about the UCB1 algorithm and its significance in addressing the challenges posed by games with high branching factors.
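As a companion to the list above, here is a minimal sketch of the MCTS loop with UCB1 selection. The toy Nim game, the Node class, and the state interface (legal_moves, play, winner) are assumptions made purely for illustration; they are not the interface used in the talk or in a Commands and Colors: Ancients implementation. UCB1 scores each child as its observed win rate plus an exploration bonus, C * sqrt(ln(N) / n), where N is the parent's visit count and n the child's.

```python
# Minimal MCTS sketch with UCB1 selection (illustrative only).
# The Nim toy game and the state interface are assumptions for this example.

import math
import random

class Nim:
    """Toy game to exercise the search: players alternately take 1-3 sticks
    from a pile; whoever takes the last stick wins."""
    def __init__(self, sticks=10, player=1):
        self.sticks, self.player = sticks, player

    def legal_moves(self):
        return [n for n in (1, 2, 3) if n <= self.sticks]

    def play(self, move):
        return Nim(self.sticks - move, -self.player)

    def winner(self):
        # The player who just moved took the last stick and wins.
        return -self.player if self.sticks == 0 else None

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state, self.parent, self.move = state, parent, move
        self.children, self.untried = [], state.legal_moves()
        self.visits, self.wins = 0, 0.0  # wins counted from the parent's perspective

    def ucb1(self, c=1.4):
        # Win rate plus exploration bonus: wins/visits + c * sqrt(ln(N_parent) / visits)
        return self.wins / self.visits + c * math.sqrt(math.log(self.parent.visits) / self.visits)

def mcts(root_state, iterations=2000):
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = max(node.children, key=Node.ucb1)
        # 2. Expansion: add one child for a not-yet-tried move.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node = Node(node.state.play(move), parent=node, move=move)
            node.parent.children.append(node)
        # 3. Playout: play random moves until the game ends.
        state = node.state
        while state.winner() is None:
            state = state.play(random.choice(state.legal_moves()))
        result = state.winner()
        # 4. Backpropagation: credit each node from its parent's point of view.
        while node is not None:
            node.visits += 1
            if node.parent is not None and result == node.parent.state.player:
                node.wins += 1
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda n: n.visits).move

if __name__ == "__main__":
    # From 10 sticks, perfect play takes 2 (leaving a multiple of 4);
    # the search should converge on that move.
    print(mcts(Nim(sticks=10)))
```

The same four-step loop carries over to much richer games; what has to change is the state representation, the playout policy, and how chance elements and hidden information are handled.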