Games are played with a strategy. Every player or team settles on a strategy before the game begins and then adapts it, or builds a new one, according to the current situation in the game.
Computer games work in much the same way. In computer games, it is search algorithms that figure out the strategy.
How it works
The goal of a search algorithm is to find the optimal sequence of moves that leads to the final winning state. These algorithms use the winning conditions, which differ from game to game, to evaluate the best moves.
Visualize a computer game as a tree. Starting from the root, we want to reach the final winning node through optimal moves; finding those moves is the job of the search algorithm. Every node in such a tree represents a future state of the game, and the search algorithm traverses this tree to make a decision at each step.
The major disadvantage of plain search algorithms is that they are exhaustive in nature: they explore the entire search space to find a solution, which wastes resources, and searching the whole space becomes increasingly cumbersome as games grow.
To avoid this problem, we can use combinational search, which applies heuristics to explore the search space and reduces its size by eliminating likely wrong moves. Such algorithms can therefore save resources. Some algorithms that use heuristics to search the space are discussed here −
Minimax Algorithm
Minimax is the strategy used by combinational search to speed up the search with a heuristic. The concept is easiest to understand in two-player games: each player tries to predict the opponent's next move and minimize the value the opponent can achieve, while maximizing their own value in the current situation.
Heuristics play an important role in strategies like Minimax. Every node of the tree has a heuristic function associated with it, and based on that heuristic the algorithm decides to move towards the node that benefits the player the most.
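The idea can be made concrete with a short sketch. The tiny game tree and its leaf scores below are invented for illustration; a leaf's number stands for the heuristic value of that state for the maximizing player:

```python
# Minimax on a tiny hand-built game tree (toy example).
# A node is either a number (heuristic value for the maximizing
# player) or a list of child nodes.

def minimax(node, maximizing):
    if isinstance(node, (int, float)):    # leaf: return its heuristic value
        return node
    scores = [minimax(child, not maximizing) for child in node]
    # The maximizer picks the best child, the minimizer the worst.
    return max(scores) if maximizing else min(scores)

# The maximizer chooses between two moves; the minimizer replies.
tree = [[3, 5], [2, 9]]
print(minimax(tree, True))   # -> 3: the maximizer can guarantee a score of 3
```

The second move looks tempting (it contains the 9), but the minimizer would answer with 2, so the first move is optimal.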
A major issue with the Minimax algorithm is that it can explore parts of the tree that are irrelevant, which wastes resources. There must therefore be a strategy to decide which part of the tree is relevant and which is not, and to leave the irrelevant part unexplored. Alpha-Beta pruning is one such strategy.
Alpha-Beta Pruning
The main goal of the Alpha-Beta pruning algorithm is to avoid searching parts of the tree that cannot contain the solution. Its main concept is to maintain two bounds, named Alpha, the maximum lower bound, and Beta, the minimum upper bound. These two parameters restrict the set of possible solutions. The algorithm compares the value of the current node with alpha and beta, so that it can descend into the part of the tree that may hold the solution and discard the rest.
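As a sketch, the two bounds can be grafted onto a plain minimax search; the toy tree and its leaf values below are again invented for illustration:

```python
def alphabeta(node, maximizing, alpha=float('-inf'), beta=float('inf')):
    if isinstance(node, (int, float)):    # leaf: return its heuristic value
        return node
    if maximizing:
        value = float('-inf')
        for child in node:
            value = max(value, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, value)     # raise the maximum lower bound
            if alpha >= beta:             # the opponent will never allow this
                break                     # prune the remaining children
        return value
    else:
        value = float('inf')
        for child in node:
            value = min(value, alphabeta(child, True, alpha, beta))
            beta = min(beta, value)       # lower the minimum upper bound
            if alpha >= beta:
                break
        return value

tree = [[3, 5], [2, 9]]
print(alphabeta(tree, True))  # -> 3, the same value plain minimax finds
```

On this tree the 9 is never examined: once the second branch yields a 2, beta drops below alpha and the branch is cut off.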
Negamax Algorithm
The Negamax algorithm is not fundamentally different from the Minimax algorithm, but it has a more elegant implementation. A drawback of using Minimax directly is that we need to define two different heuristic functions, one per player. The connection between them is that the better a state of the game is for one player, the worse it is for the other. In the Negamax algorithm, the work of those two heuristic functions is done by a single heuristic function.
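A minimal Negamax sketch, assuming leaf values are scored from the root player's point of view (the toy tree is invented for illustration):

```python
def negamax(node, color):
    # 'color' is +1 when it is the root player's turn, -1 for the
    # opponent; one heuristic serves both sides because a good state
    # for one player is exactly as bad for the other.
    if isinstance(node, (int, float)):    # leaf: flip the sign for the opponent
        return color * node
    # max(-value) replaces the separate max/min branches of minimax.
    return max(-negamax(child, -color) for child in node)

tree = [[3, 5], [2, 9]]
print(negamax(tree, 1))  # -> 3, identical to the minimax result
```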
Building Bots to Play Games
For building bots that play two-player games in AI, we need to install the easyAI library. It is an artificial intelligence framework that provides all the functionality to build two-player games. You can install it with the following command −
pip install easyAI
A Bot to Play Last Coin Standing
In this game, there is a pile of coins, and each player takes some number of coins from the pile on their turn. The goal of the game is to avoid taking the last coin in the pile. We will write a class LastCoin_game inherited from the TwoPlayersGame class of the easyAI library. The following walks through the Python code for this game −
Import the required packages as shown −
from easyAI import TwoPlayersGame, id_solve, Human_Player, AI_Player
from easyAI.AI import TT
Now, inherit the class from the TwoPlayersGame class to handle all operations of the game −
class LastCoin_game(TwoPlayersGame):
    def __init__(self, players):
Now, define the players and the player who is going to start the game.
self.players = players
self.nplayer = 1
Now, define the number of coins in the game, here we are using 15 coins for the game.
self.ncoins = 15
Define the maximum number of coins a player can take in a move.
self.maxCoins = 4
Now there are certain methods to define, as shown in the following code. Define the possible moves −
def possible_moves(self):
    return [str(a) for a in range(1, self.maxCoins + 1)]
Define the removal of the coins
def make_move(self, move):
    self.ncoins -= int(move)
Define who took the last coin.
def win_game(self):
    return self.ncoins <= 0
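Independently of easyAI, the optimal strategy for this game can be checked with a small standalone sketch; the function name and the memoization below are our own illustration, not part of the library:

```python
from functools import lru_cache

MAX_TAKE = 4   # maximum coins per move, matching maxCoins above

@lru_cache(maxsize=None)
def player_to_move_wins(ncoins):
    """True if the player about to move can avoid ever taking the
    last coin, assuming optimal play on both sides."""
    if ncoins == 1:
        return False                      # forced to take the last coin
    # Try every move that does not take the last coin outright; a move
    # wins if it leaves the opponent in a losing position.
    return any(not player_to_move_wins(ncoins - take)
               for take in range(1, min(MAX_TAKE, ncoins - 1) + 1))

print(player_to_move_wins(15))            # -> True: the first player can force a win
# The losing positions are exactly those with ncoins % 5 == 1.
print([n for n in range(1, 16) if not player_to_move_wins(n)])  # -> [1, 6, 11]
```

This matches the classic result for the game: with a maximum take of 4, the player to move loses exactly when the pile size is 1 modulo 5.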
Multi-Layer Neural Networks
In this example, we are creating a multi-layer neural network, consisting of more than one layer, to extract the underlying patterns in the training data. This multilayer neural network will work like a regressor. We are going to generate some data points based on the equation y = 2x² + 8.
Import the necessary packages as shown −
import numpy as np
import matplotlib.pyplot as plt
import neurolab as nl
Generate some data points based on the above-mentioned equation −
minVal = -30
maxVal = 30
Npoints = 160
x = np.linspace(minVal, maxVal, Npoints)
y = 2 * np.square(x) + 8
y /= np.linalg.norm(y)
Now, reshape this data set as follows −
data = x.reshape(Npoints, 1)
labels = y.reshape(Npoints, 1)
Visualize and plot the input data set using the following commands −
plt.figure()
plt.scatter(data, labels)
plt.xlabel('Dimension 1')
plt.ylabel('Dimension 2')
plt.title('Data-points')
Now, build a neural network with two hidden layers using neurolab: ten neurons in the first hidden layer, six in the second hidden layer, and one in the output layer.
neuralNet = nl.net.newff([[minVal, maxVal]], [10, 6, 1])
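As an aside, the layer list [10, 6, 1] simply describes a feed-forward stack. A pure-NumPy sketch of the forward pass through such a stack is shown below, with random, untrained weights just to make the data flow visible; tanh activations are assumed here, which matches neurolab's default transfer function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes: 1 input -> 10 -> 6 -> 1 output, the same shape as the
# neurolab network above. Weights are random for illustration only.
sizes = [1, 10, 6, 1]
weights = [rng.standard_normal((a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
biases  = [rng.standard_normal(b) for b in sizes[1:]]

def forward(x):
    # Each layer is an affine map followed by a tanh squashing.
    for W, b in zip(weights, biases):
        x = np.tanh(x @ W + b)
    return x

batch = rng.standard_normal((5, 1))   # 5 sample points, 1 feature each
out = forward(batch)
print(out.shape)                      # -> (5, 1): one prediction per sample
```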
Now, use the gradient descent training algorithm −
neuralNet.trainf = nl.train.train_gd
Now, train the network on the data generated above −
Error = neuralNet.train(data, labels, epochs = 1000, show = 100, goal = 0.01)
Now, run the neural network on the training data points −
Output = neuralNet.sim(data)
y_pred = Output.reshape(Npoints)
Now, plot the training error progress −
plt.figure()
plt.plot(Error)
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.title('Training error progress')
Now we will be plotting the actual versus predicted output −
x_dense = np.linspace(minVal, maxVal, Npoints * 2)
y_dense_pred = neuralNet.sim(x_dense.reshape(x_dense.size, 1)).reshape(x_dense.size)
plt.figure()
plt.plot(x_dense, y_dense_pred, '-', x, y, '.', x, y_pred, 'p')
plt.title('Actual vs predicted')
plt.show()
As a result of the above commands, you can observe the graphs as shown below −