This note is a part of my Zettelkasten. What is below might not be complete or accurate. It
is also likely to change often.

6th June, 2021
# Search Algorithms

This is a review of my notes from CS6300 Artificial Intelligence.

Search algorithms are usually judged on the basis of:

- Completeness - Is the algorithm guaranteed to find the result?
- Optimality - Does the algorithm always find the optimal solution (the one with the lowest path cost)?
- Time Complexity - How long does the algorithm take to find a solution?
- Space Complexity - How much memory is needed to perform a search?

### Difference between Tree and Graph Search

In Tree search, the closed list (the list of visited nodes) is not stored. On cyclic graphs, this can result in the search algorithm getting stuck in cycles. The advantage of this approach is that no memory is used for the closed list.

In Graph search, the closed list is stored and used to prune already-visited nodes from the search.
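As a sketch of the difference, here is graph-search BFS with a closed set on a small cyclic graph. The adjacency-dict representation and all names here are my own, not from the course:

```python
from collections import deque

def graph_search_bfs(graph, start, goal):
    """BFS as graph search: a closed set prevents revisiting nodes,
    so cycles in the graph cannot trap the search."""
    frontier = deque([[start]])      # queue of paths to expand
    closed = {start}                 # the closed list
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for nxt in graph.get(node, []):
            if nxt not in closed:    # the graph-search pruning step
                closed.add(nxt)
                frontier.append(path + [nxt])
    return None

# A cyclic graph: tree search (no closed set) could bounce between
# A and B forever under DFS; graph search terminates.
cyclic = {"A": ["B"], "B": ["A", "C"], "C": ["G"]}
print(graph_search_bfs(cyclic, "A", "G"))  # ['A', 'B', 'C', 'G']
```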

### Uninformed search techniques

We started off with uninformed search techniques.

- Depth First Search (DFS) - explore the newest node first. DFS is not complete in the tree-search case (it can run down an infinite branch), and it is not optimal.
- Breadth First Search (BFS) - explore the closest nodes first. This is guaranteed to be complete and optimal in a finite graph (when all step costs are equal).
- Uniform Cost Search (also called Dijkstra's Algorithm) - a variant of BFS for variable step costs: the node with the lowest path cost is expanded first.
- Backtracking Search - a variant of DFS where each successor is explored in turn and the current search description is modified in situ to save memory.
- Depth Limited Search - a variant of DFS which alleviates the problem of infinite search paths by limiting the depth to which the search may go.
- Iterative Deepening Depth First Search - multiple iterations of Depth Limited Search, incrementing the depth limit on each iteration.

If **b** is the branching factor and **d** is the depth, the time complexity of all these algorithms is on the order of **O(b^d)**.
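The last two variants above can be sketched as follows; the toy graph and the `max_depth` cap are my own illustrative choices:

```python
def depth_limited_search(graph, node, goal, limit):
    """DFS that gives up once the depth limit is exhausted."""
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for nxt in graph.get(node, []):
        result = depth_limited_search(graph, nxt, goal, limit - 1)
        if result is not None:
            return [node] + result
    return None

def iterative_deepening(graph, start, goal, max_depth=20):
    """Run depth-limited search with an increasing limit: combines
    DFS's low memory use with BFS's completeness at finite depths."""
    for limit in range(max_depth + 1):
        result = depth_limited_search(graph, start, goal, limit)
        if result is not None:
            return result
    return None

tree = {"A": ["B", "C"], "B": ["D"], "C": ["G"]}
print(iterative_deepening(tree, "A", "G"))  # ['A', 'C', 'G']
```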

### Informed Search

Informed search techniques use an evaluation function **f(n)** to decide which node to expand. This evaluation function combines the heuristic function **h(n)** with the path cost function **g(n)**. The heuristic function is the estimated cost of the cheapest path from node n to a goal state. It should be 0 when n is a goal node and increase the further the node is from the goal.

- Greedy Best-first Search - expands the node with the minimal **h(n)** (i.e. **f(n) = h(n)**). Highly suboptimal but useful.
- A* Search - a variant of Uniform Cost Search where the node with the least **f(n) = g(n) + h(n)** is expanded. A* search is optimal if the heuristic is consistent.
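A minimal A* sketch under those definitions. The toy graph, step costs, and heuristic table are hypothetical (the heuristic happens to be consistent):

```python
import heapq

def a_star(graph, h, start, goal):
    """A*: expand the node with the lowest f(n) = g(n) + h(n).
    `graph[n]` is a list of (neighbor, step_cost) pairs; `h` is the heuristic."""
    frontier = [(h(start), 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                          # cheapest g found so far
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nxt, float("inf")):
                best_g[nxt] = g2
                heapq.heappush(frontier, (g2 + h(nxt), g2, nxt, path + [nxt]))
    return None, float("inf")

graph = {"S": [("A", 1), ("B", 4)], "A": [("B", 2), ("G", 5)], "B": [("G", 1)]}
h = {"S": 4, "A": 3, "B": 1, "G": 0}.get
print(a_star(graph, h, "S", "G"))  # (['S', 'A', 'B', 'G'], 4)
```

Setting `h` to a constant 0 makes **f(n) = g(n)**, which degenerates to Uniform Cost Search.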

The performance of informed searches depends on the quality of the heuristic.

## Constraint Satisfaction Problems

These are a type of search problem where, along with variables and domains like any search problem, there are constraints which specify how one variable's value restricts the values of others.

A solution to a CSP is an assignment in which no constraint is violated. One example of a CSP is the Sudoku puzzle. Some approaches to solving this type of search problem are:

- Arc Consistency Algorithm (AC-3) - tests whether a CSP state is consistent with all the constraints. If it is consistent, the domains are reduced by removing all inconsistent values.
  - Create a queue filled with all the arcs. Arcs are directed edges connecting two nodes that are bound together by a constraint.
  - While the queue has items:
    - Pop an arc from the queue.
    - Enforce the constraint to remove unsupported values from the domain of the arc's tail node. If the domain is actually reduced, add all arcs pointing into the reduced node back into the queue.
    - If the domain of any node becomes empty, the state is inconsistent.

- DFS (Backtracking) with Arc Consistency - alternating the arc consistency algorithm with DFS makes it easier to solve CSPs, since arc consistency significantly reduces the branching factor of the search.
- Proper use of heuristics can further cut down on the amount of computation needed to find the solution. One is choosing the least constraining value of a node to search first. Another is the minimum remaining values heuristic - choosing the node with the smallest domain to explore further.
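A rough sketch of the arc consistency loop, assuming constraints are given as predicates keyed by directed arcs. The representation and the two-variable example are my own simplification:

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3 sketch. `domains` maps variable -> set of values;
    `constraints` maps a directed arc (x, y) -> predicate on (vx, vy).
    Returns False if some domain is wiped out (state inconsistent)."""
    queue = deque(constraints)           # start with every arc
    while queue:
        x, y = queue.popleft()
        pred = constraints[(x, y)]
        # Keep only values of x supported by some value of y.
        revised = {vx for vx in domains[x]
                   if any(pred(vx, vy) for vy in domains[y])}
        if revised != domains[x]:
            domains[x] = revised
            if not revised:
                return False             # empty domain: inconsistent
            # Re-check every arc pointing at x.
            queue.extend(arc for arc in constraints if arc[1] == x)
    return True

# Hypothetical 2-variable CSP: A < B, with both directed arcs explicit.
domains = {"A": {1, 2, 3}, "B": {1, 2, 3}}
constraints = {("A", "B"): lambda a, b: a < b,
               ("B", "A"): lambda b, a: a < b}
print(ac3(domains, constraints), domains)  # True {'A': {1, 2}, 'B': {2, 3}}
```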

## Adversarial Search

A special variant of general search where the search runs on a decision tree that cycles through decisions taken by you and your opponents. The deterministic version is called minimax search.
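The minimax idea can be sketched on a hypothetical two-ply game tree (the tree and utilities below are made up for illustration):

```python
def minimax(node, maximizing, tree, utility):
    """Minimax over a game tree given as a dict of children;
    leaf nodes are scored by the `utility` table."""
    children = tree.get(node, [])
    if not children:                      # leaf: return its utility
        return utility[node]
    values = [minimax(c, not maximizing, tree, utility) for c in children]
    # MAX picks the best value for itself; MIN picks the worst for MAX.
    return max(values) if maximizing else min(values)

# MAX moves at the root, MIN replies one ply down.
tree = {"root": ["L", "R"], "L": ["L1", "L2"], "R": ["R1", "R2"]}
utility = {"L1": 3, "L2": 5, "R1": 2, "R2": 9}
print(minimax("root", True, tree, utility))  # 3
```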

## Stochastic Search

What if the agent has to choose an action at a node and the outcome is not deterministic but probabilistic? To choose between actions with different outcome probabilities, the agent needs to calculate the expected utility of each action - the sum of the probability-weighted utilities of its outcomes.

Using this, the tree can be traversed and the right decisions taken at each node. A subclass of this type of search is Markov Decision Processes.
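A minimal sketch of choosing by expected utility; the two actions and their probabilities and utilities are made up:

```python
def expected_utility(outcomes):
    """Expected utility of one action: the sum of probability-weighted
    utilities over its (probability, utility) outcome pairs."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """Pick the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

# A safe action vs. a risky one: EU(safe) = 4.0, EU(risky) = 3.0.
actions = {"safe":  [(1.0, 4.0)],
           "risky": [(0.5, 10.0), (0.5, -4.0)]}
print(best_action(actions))  # safe
```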