Cooperation through Reinforcement Learning in a Collaborative Game

Year:

2015

Status:

Completed

Authors:

Pedro Vargas Nobre de Gusmão

Advisors:

Summary

This work aims to create agents for the collaborative game Geometry Friends that can work together with previously unknown teammates, without any a priori coordination. Starting from an agent for the circle character that uses a reinforcement learning approach, the work continues its development to further improve its behavior and performance. This process goes through several stages, analyzing the agent's components and adjusting their behavior whenever necessary and possible to improve the agent's performance. These mechanisms are then extended to the game's other character, the square, adapting the components where needed to the specific problems the square character faces. Once both agents are complete, the work focuses on coordination between the agents and on the difficulties the implementation poses for extending them to cooperative problems. Throughout the various phases of development, the agents are tested to determine the impact that each change has on their performance. The tests suggest that the agents' internal functionalities introduce some incompatibilities with the intended behavior, since they limit the behaviors that can be added to the agents. While the circle agent improves, the square and cooperative performances remain below expectations.
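The abstract does not specify which reinforcement learning algorithm the circle agent uses, so the following is only a minimal sketch of a generic tabular Q-learning loop of the kind such an agent could build on; the class name, action set, and the parameters alpha, gamma, and epsilon are illustrative assumptions, not details taken from the thesis.

```python
# Hypothetical sketch of a tabular Q-learning agent; the state representation,
# action set, and hyperparameters are assumptions for illustration only.
import random
from collections import defaultdict


class QLearningAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = defaultdict(float)   # Q-values indexed by (state, action)
        self.actions = actions        # discrete action set (assumed)
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def choose_action(self, state):
        # Epsilon-greedy selection over the discrete action set.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # One-step Q-learning update:
        # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

In a Geometry Friends setting, the actions passed to such an agent would correspond to the character's controls (for example, rolling or jumping for the circle), with the level state discretized into the table's keys; those mappings are part of the agent design discussed in the work, not of this sketch.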