Thesis (open access)
Machine learning is a core component of most current artificial intelligence applications, as it allows programs to improve their performance continually without outside intervention. An important testing ground for machine learning algorithms is learning how to play a particular game; game-playing programs are considered to have advanced the field of AI significantly because they provide an easy way to measure the performance of an AI agent. This project focuses on the games Connect Four and Robocode. The Connect Four agent uses a neural network as the heuristic function in a standard minimax tree search. The neural network is trained using particle swarm optimization, an algorithm inspired by the flocking behavior of birds and fish, in a tournament-style competition against other neural networks. The only game-specific information provided is whether a particular game was won, lost, or drawn. Even with this little information, the algorithm proves capable of creating game-playing agents that are equal to or better than hand-designed programs.

In addition, this project extends the approach to games that are not turn-based perfect-information games. Robocode, an educational game that pits AI tanks against each other, serves as the testing ground for this part of the work. Particle swarm optimization is used to train neural networks that perform the tank's targeting subroutine, with the robot's score at the end of a battle serving as the measure of a network's performance. Results of this approach are presented, showing that the algorithm can successfully create aiming networks.
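The particle swarm optimization loop described in the abstract can be sketched as follows. This is a minimal, generic PSO minimizer: the swarm size, inertia and attraction coefficients, and the toy sphere fitness function are illustrative assumptions, standing in for the thesis's actual setup in which a particle's fitness would come from its win/loss/draw record or Robocode battle score.

```python
import random

def pso(fitness, dim, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimize `fitness` over R^dim with a basic particle swarm."""
    rng = random.Random(seed)
    # Initialize particle positions randomly and velocities at zero.
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]          # each particle's best-seen position
    pbest_f = [fitness(x) for x in xs]  # and its fitness there
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]

    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Velocity blends inertia, a pull toward the particle's own
                # best, and a pull toward the swarm's global best.
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            f = fitness(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Toy fitness: the sphere function, standing in for a tournament score.
best, best_f = pso(lambda x: sum(xi * xi for xi in x), dim=4)
```

In the thesis's setting, each position vector would encode a neural network's weights, and `fitness` would be replaced by the (negated) result of playing that network against the rest of the swarm.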
Alexiev, Valeri, "Machine Learning through Evolution: Training Algorithms through Competition" (2013). Computer Science Honors Theses. 33.
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License.