Automatic generation of neural game controller using single and bi-objective evolutionary optimization algorithms for RTS Game

Chang, Kee Tong (2015) Automatic generation of neural game controller using single and bi-objective evolutionary optimization algorithms for RTS Game. Masters thesis, Universiti Malaysia Sabah.

24 PAGES.pdf


The digital gaming industry has grown rapidly and has become one of the most profitable industries over the last decade. A good game is very profitable. Hence, developers work hard to incorporate Artificial Intelligence (AI) technologies to build better games and attract more players, especially for Real-Time Strategy (RTS) games. Nevertheless, beyond improving visualization for better attraction, designing a good RTS game raises many problems, such as level of difficulty, AI bots, formation marching, positioning of characters or objects, etc. These problems can be addressed with AI technology. Reinforcement, the process of strengthening an army, is also a crucial issue in game design, and it is the focus of most players when planning their gameplay strategy. Existing research on reinforcement issues has shown that AI can provide solutions. Evolutionary Computing (EC) is chosen as the AI method for its stochastic features, and it has shown promising results in many fields. Therefore, the main objective of this research is to investigate the performance of single-objective and bi-objective hybridised EC as RTS game controllers for the reinforcement issue. The proposed EC methods are the Genetic Algorithm (GA), Differential Evolution (DE), Evolutionary Programming (EP), and Pareto-based Differential Evolution (PDE). The sub-objectives are: 1) to conduct a preliminary optimization experiment with different crossover and mutation rates using GA and Feed-Forward Artificial Neural Networks (FFNN). After determining the rates, another single-objective algorithm is tested; hence, the second sub-objective is 2) to evolve RTS controllers using DE and FFNN. A bi-objective algorithm is then tested for comparison purposes, which contributes the next two sub-objectives: 3) to test the feasibility of implementing the PDE hybrid FFNN,
4) to compare the performance of single-objective and multi-objective optimization algorithms. Ch'ng and Teo showed in their research that EP can generate promising results; henceforth, EP is introduced as a benchmarking algorithm, which gives the last sub-objective: 5) to test the performance of EP, DE, PDE, and FFNN under an identical environment. The experimental results show that all the applied algorithms were able to generate good solutions for the reinforcement issues. The first experiment shows no significant difference among the combinations of crossover and mutation rates; thus, crossover and mutation rates selected from the literature were adopted in the later experiments. The second experiment shows that both the GA and DE algorithms can generate optimal solutions with very high fitness scores, but at an extremely high spawning cost. The next experiment shows that the generated PDE controllers obtained lower fitness scores but better spawning strategies compared with both the GA and DE controllers. In the last experiment, the results show that the DE and EP algorithms can generate superior controllers, whilst PDE is only capable of generating sub-optimal controllers. Nevertheless, the solutions provided by PDE were 1) cheaper in terms of spawning cost, 2) less time-consuming, 3) strongly defensive in the early stage of the gameplay, and 4) more practical during gameplay.
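The core technique the thesis combines in each sub-objective, an evolutionary algorithm evolving the weights of a feed-forward network that acts as a game controller, can be sketched as follows. This is a minimal illustrative sketch, not the thesis code: the network shape, the stand-in fitness function, and all rates are assumptions (the thesis scores controllers by simulated RTS gameplay, e.g. army strength versus spawning cost).

```python
import random
import math

# Illustrative assumptions: 3 game-state inputs (e.g. own/enemy strength,
# resources) mapped to 2 spawn decisions by a tiny one-hidden-layer FFNN.
IN, HID, OUT = 3, 4, 2
N_WEIGHTS = IN * HID + HID * OUT  # biases omitted for brevity

def forward(weights, inputs):
    """Evaluate the feed-forward network on one input vector."""
    w1 = weights[:IN * HID]
    w2 = weights[IN * HID:]
    hidden = [math.tanh(sum(inputs[i] * w1[i * HID + h] for i in range(IN)))
              for h in range(HID)]
    return [math.tanh(sum(hidden[h] * w2[h * OUT + o] for h in range(HID)))
            for o in range(OUT)]

def fitness(weights):
    """Stand-in fitness: reward outputs close to a fixed target response.
    Always <= 0; the thesis instead scores simulated gameplay outcomes."""
    target = [0.5, -0.5]
    out = forward(weights, [1.0, 0.0, -1.0])
    return -sum((o - t) ** 2 for o, t in zip(out, target))

def evolve(pop_size=30, generations=50, cx_rate=0.7, mut_rate=0.1, seed=0):
    """Simple elitist GA over flattened FFNN weight vectors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(N_WEIGHTS)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            # uniform crossover at cx_rate, otherwise clone parent a
            child = ([x if rng.random() < 0.5 else y for x, y in zip(a, b)]
                     if rng.random() < cx_rate else a[:])
            for i in range(N_WEIGHTS):        # Gaussian mutation
                if rng.random() < mut_rate:
                    child[i] += rng.gauss(0, 0.3)
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Swapping the loop body for DE-style mutation and crossover, or for a Pareto-ranked selection over two objectives (e.g. fitness and spawning cost), yields the DE and PDE variants compared in the thesis.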

Item Type: Thesis (Masters)
Keyword: Digital gaming, Artificial Intelligence, AI, Technologies, Real-Time Strategy game
Subjects: QA76
Department: FACULTY > Faculty of Computing and Informatics
Depositing User: DG MASNIAH AHMAD
Date Deposited: 21 Jan 2021 14:46
Last Modified: 25 Jan 2024 16:21
