Master Thesis: Deep Reinforcement Learning using Capsules in Advanced Game Environments

I just finished my Master’s Thesis at University of Agder. Read it in full here.

Abstract:

Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential for artificial-intelligence-based opponents in computer games. This success is primarily due to the vast capabilities of Convolutional Neural Networks (ConvNets), which enable algorithms to extract useful information from noisy environments. The Capsule Network (CapsNet) is a recent addition to the Deep Learning family of algorithms and has only barely begun to be explored. The network is an architecture for image classification, with superior performance on the MNIST dataset. CapsNets have not been explored beyond image classification.
This thesis introduces the use of CapsNets for Q-Learning-based game algorithms. To successfully apply CapsNets in advanced game play, three main contributions follow. First, the introduction of four new game environments as frameworks for RL research with increasing complexity, namely FlashRL, Deep Line Wars, Deep RTS, and Deep Maze. These environments fill the gap between the relatively simple and the more complex game environments available for RL research, and they are used in the thesis to test and explore CapsNet behavior.
Second, the thesis introduces a generative modeling approach to produce artificial training data for use in Deep Learning models, including CapsNets. We empirically show that conditional generative modeling can successfully generate game data of sufficient quality to train a Deep Q-Network well.
Third, we show that CapsNet is a reliable architecture for Deep Q-Learning-based algorithms for game AI. A capsule is a group of neurons that determines the presence of an object in the data and is shown in the literature to increase the robustness of training and predictions while lowering the amount of training data needed. It should, therefore, be ideally suited for game playing.
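To give a concrete feel for what a capsule computes, here is a minimal NumPy sketch of the "squash" non-linearity from the original CapsNet paper: each capsule's raw output vector is rescaled so that its length lies in (0, 1) and can be read as the probability that the entity the capsule represents is present. The names and shapes below are illustrative only and are not taken from the thesis code.

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """Squash a capsule's raw output vector s so that its length lies in
    (0, 1) while its orientation is preserved (Sabour et al., 2017)."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm)
    return scale * s / np.sqrt(sq_norm + eps)

# Toy example: 3 capsules, each with a 4-dimensional pose vector.
raw = np.random.randn(3, 4)
caps = squash(raw)

# The vector length acts as the "presence" of the entity a capsule detects;
# in a capsule-based Q-network these lengths (or the full vectors) would
# feed the layer that estimates Q-values.
presence = np.linalg.norm(caps, axis=-1)
print(presence)  # values in (0, 1)
```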

AI2017: Towards a Deep Reinforcement Learning Approach for Tower Line Wars

I just published a paper to AI-2017. Read it in full here!

Abstract

There have been numerous breakthroughs in reinforcement learning in recent years, perhaps most notably Deep Reinforcement Learning successfully playing and winning relatively advanced computer games. There is undoubtedly an anticipation that Deep Reinforcement Learning will play a major role when the first AI masters the complicated game play needed to beat a professional Real-Time Strategy game player. For this to be possible, there needs to be a game environment that targets and fosters AI research, and specifically Deep Reinforcement Learning. Some game environments already exist; however, these are either overly simplistic, such as Atari 2600, or overly complex, such as Starcraft II from Blizzard Entertainment. We propose a game environment in between Atari 2600 and Starcraft II, particularly targeting Deep Reinforcement Learning algorithm research. The environment is a variant of Tower Line Wars from Warcraft III, Blizzard Entertainment. Further, as a proof of concept that the environment can harbor Deep Reinforcement Learning algorithms, we propose and apply a Deep Q-Reinforcement architecture. The architecture simplifies the state space so that it is applicable to Q-learning, and in turn improves performance compared to current state-of-the-art methods. Our experiments show that the proposed architecture can learn to play the environment well, and scores 33% better than standard Deep Q-learning, which in turn proves the usefulness of the game environment.
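The core idea of the abstract, shrinking the raw game state until Q-learning becomes tractable, can be illustrated with a small sketch. The pooling factor, the dictionary Q-table, and the update rule below are illustrative assumptions for the general technique, not the exact architecture from the paper.

```python
import numpy as np

def simplify_state(frame, grid=(8, 8)):
    """Hypothetical state abstraction: average-pool a (H, W) occupancy map
    down to a coarse grid, then flatten it into a hashable key."""
    h, w = frame.shape
    gh, gw = grid
    pooled = frame.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    return tuple((pooled > 0).astype(int).ravel())

def q_update(Q, s, a, r, s_next, n_actions, alpha=0.1, gamma=0.99):
    """Standard tabular Q-learning update on the abstracted state."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(n_actions))
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

Q = {}
frame = np.zeros((64, 64))   # toy 64x64 map of unit positions
frame[10:14, 20:24] = 1.0    # a cluster of enemy units
s = simplify_state(frame)
q_update(Q, s, a=2, r=1.0, s_next=s, n_actions=5)
```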

NIK2017: FlashRL: A Reinforcement Learning Platform for Flash Games

I just published a paper to NIK2017. Read it in full here!

Abstract

Reinforcement Learning (RL) is a research area that has blossomed tremendously in recent years and has shown remarkable potential in, among other things, successfully playing computer games. However, only a few game platforms exist that provide the diversity in tasks and state-space needed to advance RL algorithms. The existing platforms offer RL access to Atari games and a few web-based games, but no platform fully exposes access to Flash games. This is unfortunate, because applying RL to Flash games has the potential to push the research of RL algorithms.

This paper introduces the Flash Reinforcement Learning platform (FlashRL) which attempts to fill this gap by providing an environment for thousands of Flash games on a novel platform for Flash automation. It opens up easy experimentation with RL algorithms for Flash games, which has previously been challenging. The platform shows excellent performance with as little as 5% CPU utilization on consumer hardware. It shows promising results for novel reinforcement learning algorithms.
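An experiment on such a platform boils down to a frame-in, action-out loop against a Flash game. The class and method names in the sketch below are placeholders chosen to illustrate that loop; they are not the actual FlashRL API, which is described in the paper and the source code.

```python
import random

class FlashEnv:
    """Hypothetical wrapper around a Flash game; the real FlashRL API differs.
    A real platform would grab screen frames and inject key/mouse events."""

    def __init__(self, game_name):
        self.game_name = game_name   # name of a bundled .swf game

    def reset(self):
        return b""                   # would return the first screen frame

    def step(self, action):
        # would send the key event and grab the next frame
        return b"", 0.0, False       # (next_frame, reward, done)

env = FlashEnv("multitask")
frame = env.reset()
for t in range(100):
    action = random.randrange(4)     # random policy standing in for an agent
    frame, reward, done = env.step(action)
    if done:
        break
```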

Source code can be found here


LineWars: Reinforcement Learning idea

Now that the DeepRTS engine is in a stable state and I'm ready to research reinforcement learning algorithms for it, I need a new side project. Currently I'm planning to create a web-based, VNC-compatible game based on Hero Line Wars, a Warcraft III modification.

The objective of this game is to control a hero unit which defends your base. You defend your base by killing off enemies spawned by the opposing player. The secondary objective of the game is to send units to the opposing player, attempting to overrun him. If you succeed in overrunning the opposing player, your units destroy his base and you win the game.

This game should be fairly simple to implement, and hopefully it will only require ~1,000 lines of code, including the logic engine and graphics. The reason for implementing such a game is that it has a reduced state and action space compared to DeepRTS and other RTS games, but it is still fairly complex to master.
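As a rough idea of how compact the state and action space could be kept, the sketch below encodes the proposed game as a handful of scalar features and discrete actions. The field names and action set are placeholders for the design idea, not a finished specification.

```python
from dataclasses import dataclass
from enum import IntEnum

class Action(IntEnum):
    MOVE_LEFT = 0
    MOVE_RIGHT = 1
    ATTACK = 2
    SEND_UNIT = 3          # spend gold to send a unit at the opponent
    NOOP = 4

@dataclass
class State:
    hero_x: float          # hero position along the defence lane
    base_health: float     # own base health, normalized to [0, 1]
    enemy_base_health: float
    incoming_units: int    # enemies currently walking toward the base
    gold: int

    def to_vector(self):
        """Flat feature vector suitable as input to a small Q-network."""
        return [self.hero_x, self.base_health, self.enemy_base_health,
                float(self.incoming_units), float(self.gold)]
```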

DeepRTS and ML-Algorithms

In the beginning of 2017, I started researching how to apply tree-search algorithms to real-time strategy games. microRTS is an implementation of an RTS game in its simplest form and allows for research in various areas of machine learning.

I developed numerous Monte Carlo based tree searches, and they gave good results in microRTS. I figured that microRTS was too simple, so I started work on a new engine based on the principles of Warcraft II. This implementation had a variable complexity level based on which features I enabled in the configuration file.

The first version of this game was developed in Python.

When version 1.0 was complete, I stumbled upon performance issues. Python was simply not fast enough to support tree-search algorithms and yielded only ~40 node visits per game frame. For algorithms utilizing the GPU this was not an issue, but CPU-bound tasks were a big problem. Starting to optimize the game engine, I utilized Cython, which compiles Python to C/C++. Using Cython yielded very good results, but at the cost of reduced debugging capabilities. This made further development hard, and the engine was rewritten in pure C++.

The new C++ implementation was much faster, and it also embeds Python so that libraries like Tensorflow, Keras, and Theano can be utilized for machine learning. Furthermore, it increases the tree-search performance to ~10,000 node visits per game frame, which is a huge boost over the Python implementation.
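A common pattern with an embedded interpreter is to let the C++ engine hand the game state to a small Python module that owns the Keras model. The sketch below shows what such a Python-side hook could look like; the function name, network size, and tensor shapes are assumptions for illustration, not the actual DeepRTS interface.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Hypothetical Python-side hook callable from a C++ engine through the
# embedded interpreter: build a tiny Q-network once, then expose predict().
_model = Sequential([
    Dense(64, activation="relu", input_shape=(10,)),
    Dense(5, activation="linear"),      # one output per action
])
_model.compile(optimizer="adam", loss="mse")

def predict(state_values):
    """Called from C++ with a flat list of floats; returns Q-values."""
    x = np.asarray(state_values, dtype="float32").reshape(1, -1)
    return _model.predict(x)[0].tolist()
```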

The game currently has four algorithms implemented: Deep Q-Network (DQN), Monte Carlo Tree Search (MCTS), Monte Carlo Action Search (MCAS), and Monte Carlo Search Direct (MCTSDirect). Each of the MC algorithms is just a different way of interpreting the score for each node, thus giving very different results.

As we can see from the graph, plain MCTS outperforms my attempts to make "shortcuts" in the algorithm. MCTSDirect simply skips some intermediate nodes which it attempts to classify as "useless", while MCAS attempts to build a Q-table of which action should be taken at which time.
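For reference, the selection rule that plain MCTS relies on is the UCB1 score below; the shortcut variants essentially change how this score (or the backed-up value) is interpreted per node. This is a generic sketch of the standard technique, not the engine's actual implementation.

```python
import math

def ucb1(node_value, node_visits, parent_visits, c=1.41):
    """UCB1 score used during MCTS selection: exploit the average reward,
    but add an exploration bonus for rarely visited nodes."""
    if node_visits == 0:
        return float("inf")
    return node_value / node_visits + c * math.sqrt(
        math.log(parent_visits) / node_visits)

def select_child(children):
    """children: list of dicts with 'value' and 'visits' statistics."""
    parent_visits = sum(ch["visits"] for ch in children) + 1
    return max(children,
               key=lambda ch: ucb1(ch["value"], ch["visits"], parent_visits))
```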

DQN is DeepMind's implementation of an AI for Atari games. The reason it does not perform well here is the data representation and the depth/size of the network.
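To make the depth/size remark concrete, the original DQN for Atari uses a few convolutional layers over stacked frames followed by dense layers; a state representation or network far smaller than that tends to underfit. The Keras sketch below reconstructs that style of network, with the input shape and action count as assumptions rather than values from the DeepRTS experiments.

```python
from keras.models import Sequential
from keras.layers import Conv2D, Flatten, Dense

def build_dqn(input_shape=(84, 84, 4), n_actions=6):
    """Generic DQN-style Q-network: convolutions over stacked frames,
    then dense layers mapping to one Q-value per action."""
    model = Sequential([
        Conv2D(32, 8, strides=4, activation="relu", input_shape=input_shape),
        Conv2D(64, 4, strides=2, activation="relu"),
        Conv2D(64, 3, strides=1, activation="relu"),
        Flatten(),
        Dense(512, activation="relu"),
        Dense(n_actions, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```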