Uninformed search algorithms for Intelligent planning and decision-making in AI and Robotics

The relevance of Breadth-first, Depth-first, and uniform cost search algorithms in the planning space

aswath govind
Geek Culture


Photo by Michael Fousert on Unsplash

Planning and problem-solving in AI and robotics have almost become synonymous terms, and their inherent differences have faded in recent years. Still, these terms mean different things to people from different engineering backgrounds. A roboticist tackling a motion-planning problem looks for a plan, a sequence of actions, that makes a robot execute a specific task, whereas in the AI community the same idea appears as the problem of finding a finite set of actions that takes a Rubik's cube or an eight-puzzle from a particular start state to a desired goal state. As more and more decision-theoretic algorithms are incorporated into various engineering domains, planning algorithms find applications in areas like smart video-game characters, robot motion planning, autonomous vehicles, better protein folding, and smarter drug design.

An algorithm employed for planning searches for a feasible path, or sequence of actions, that takes an agent from a start state to a desired goal state. It explores the agent's environment based on a specific strategy to arrive at a plan, which involves systematically searching for feasible plans across the different states of the agent. A state here could mean a position and orientation of the robot, or a particular configuration of tiles in an eight-puzzle problem, whereas the state space refers to all possible states that the agent can attain by executing feasible actions.

In this article, I have touched upon uninformed search algorithms that work on finite, discrete state spaces like a 2D grid with finite dimensions, and I have applied these algorithms to move an agent on a 2D grid. Depending on the nature of the algorithm's search for a plan that takes a robot or agent from a start node to a goal node, the feasibility and optimality of the algorithm change. Optimality here means finding an executable plan that is optimized for a particular requirement like cost, distance, or time, whereas feasibility means just finding a plan, regardless of what it takes to arrive at the goal state. It is also important for the search algorithm to be systematic: in finite time it must either return an executable plan that takes the agent from the start state to the goal state, if a feasible plan exists, or report that no such plan exists.

For demonstration purposes, I have considered a robot that is capable of moving in a 2D grid. From any given state the robot can move in four directions (up, down, right, and left). When one of these actions is applied to the robot in its current state (CurrX, CurrY), the robot attains a new state (NewX, NewY). For example, if the robot is at (CurrX, CurrY) = (0,0) and an action to move up, U = (0,+1), is applied, then the robot attains the new state (NewX, NewY) = (0,1).
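This state-transition rule can be sketched in a few lines of Python (the action names and the apply_action helper are illustrative choices of mine, not part of the grid implementation used later in this article):

```python
# The four feasible actions and their (dx, dy) effects (illustrative names).
ACTIONS = {
    "U": (0, 1),   # up
    "D": (0, -1),  # down
    "R": (1, 0),   # right
    "L": (-1, 0),  # left
}

def apply_action(state, action):
    """Apply an action to the current state (CurrX, CurrY) and
    return the new state (NewX, NewY)."""
    curr_x, curr_y = state
    dx, dy = ACTIONS[action]
    return (curr_x + dx, curr_y + dy)

print(apply_action((0, 0), "U"))  # (0, 1)
```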

Here, the whole objective of the search algorithm is to find a feasible plan that consists of a sequence of actions that would take a robot from the start node (StartX, StartY) to the goal node (GoalX, GoalY). The whole search problem becomes more complicated when it is put to test in much more complicated labyrinths like the one shown below.

A labyrinth constructed using Matplotlib (Photo by author)

The blue square denotes the start node and the green cell denotes the Goal node. The robot has to navigate from the start to the goal through the free space(denoted in light blue) at the same time it should avoid all the obstacles in its way(denoted in light green).

Here, I would like to present three algorithms that are capable of searching for a feasible path in finite, discrete spaces. All of them are categorized as general forward-search algorithms and are widely popular in the computer-science community, as they find applications in various data-structure and search-related problems. This article is an effort to show how they have found relevance in planning algorithms that help an agent or a robot plan its actions to execute a particular task.

The algorithms that I am presenting in this article are:-

  1. Breadth-First Search
  2. Depth-First Search
  3. Uniform-Cost Search

Before digging into each of these algorithms, we shall have a look at the general template of the forward-search algorithms. The pseudocode below paints an abstract picture of how the algorithm is constructed to perform different searches. To get the most out of it, it is important to build some related vocabulary.

Unvisited:- States or nodes that are yet to be visited or explored.

Successor or neighbor node:- A neighbor node is a node that can be visited from any current node by performing a certain feasible action. (Denoted with orange color in the animation given below).

In our case, a set of neighbor nodes or successor nodes for any given node is the adjacent nodes along the up, down, right and left directions.

Dead:- A node is declared to be dead when it is visited and all its neighbors are also visited.

Alive:- Alive nodes are states that have been visited but their neighbor nodes are yet to be visited. (Denoted with light orange or red color in the animation given below).

# FORWARD SEARCH ALGORITHM TEMPLATE
List.Insert([startX, startY])
Visited.append([startX, startY])

while List is not empty do
    X -> List.NodeForExploration()  # returns and deletes the chosen node from List
    if X.isGoalNode():
        return Plan
    for all possible actions from X:
        newX, newY = [CurrX, CurrY] + action
        if [newX, newY] is not visited:
            Visited.append([newX, newY])
            List.append([newX, newY])

return FAILURE

Initially, the algorithm appends the start node to a list that acts as a fringe, keeping track of all the states that are yet to be explored. The list here acts like a priority queue. The algorithm then iterates as long as the fringe is not empty: it takes a node from the queue based on the priority function and explores all its neighbor nodes, and any neighbor that has not been visited before is added to the list. This iteration continues until the node taken from the fringe matches the goal node, or until the list is exhausted without any node matching the goal. In the former case, a feasible plan is returned. The optimality and nature of the algorithm's search depend on the priority function that pops out a node for exploration in each iteration. Across the different search algorithms presented in this article, the general template above remains the same; only the priority function that returns a node for further search in each iteration differs, based on the strategy we choose to implement.
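The template can also be written as a rough, runnable Python sketch (the successors function and the 3x3 toy grid here are my own illustrative assumptions; only the pop strategy changes between the algorithms that follow):

```python
def forward_search(start, goal, successors, pop):
    """Generic forward-search template. `pop` is the priority function
    that removes and returns the next frontier node to expand."""
    frontier = [start]
    visited = {start}
    parent = {start: None}       # for path reconstruction
    while frontier:
        node = pop(frontier)     # priority function chooses the node
        if node == goal:         # goal reached: walk back to the start
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in successors(node):
            if nxt not in visited:
                visited.add(nxt)
                parent[nxt] = node
                frontier.append(nxt)
    return None                  # no feasible plan exists

# A toy obstacle-free 3x3 grid as the agent's environment:
def successors(s):
    x, y = s
    moves = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    return [(x + dx, y + dy) for dx, dy in moves
            if 0 <= x + dx < 3 and 0 <= y + dy < 3]

# FIFO pop gives breadth-first behavior; `lambda f: f.pop()` would give LIFO.
plan = forward_search((0, 0), (2, 2), successors, lambda f: f.pop(0))
print(plan)
```

Note that this sketch returns the sequence of visited states; the implementations below instead record the action taken at each step.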

Breadth-First Search:- In BFS, the strategy used to decide upon a node for exploration is First-In, First-Out: nodes are selected on a first-come, first-served basis. The algorithm expands nodes uniformly, and in the implementation we can see that the nodes are expanded in wavefronts rather than along a specific direction. The algorithm is systematic. My implementation of breadth-first search in Python is given below. Here I keep two lists, one to maintain the data on whether a node has been visited and another as the fringe, and a dictionary to keep track of and reconstruct the path. The most critical aspect is that, following the FIFO principle, I always use List[0] as the node for expansion. This makes sure that the longest-waiting node is expanded first.

def breadthFirstSearch(problem):
    path_found = False

    List = []   # fringe (FIFO queue)
    dead = []   # nodes already visited

    Goalnode = problem.getGoalState()
    Startnode = problem.getStartState()

    dict_pathRecon = dict()                  # child -> parent node
    dict_pathRecon[str(Startnode)] = None

    temp_dict_pathRecon = dict()             # child -> (node, action, cost)
    temp_dict_pathRecon[str(Startnode)] = None

    List.append(Startnode)
    dead.append(Startnode)

    while len(List) > 0:
        curr = List.pop(0)   # FIFO: expand the longest-waiting node
        if curr == Goalnode:
            path_found = True
            print("Found goal")
            break

        for i in problem.getSuccessors(curr):
            next_node = i[0]
            if next_node not in dead:
                List.append(next_node)
                dict_pathRecon[str(next_node)] = curr
                temp_dict_pathRecon[str(next_node)] = i
                dead.append(next_node)

    print("The total number of nodes explored:- ", len(dead))

    # Path reconstruction: walk back from the goal to the start
    path = []
    if path_found:
        while dict_pathRecon[str(Goalnode)] is not None:
            path.append(temp_dict_pathRecon[str(Goalnode)][1])
            Goalnode = dict_pathRecon[str(Goalnode)]
        path.reverse()

    print("The path here is:-", path, len(path))

    return path

All plans with n steps are explored completely before plans with n+1 steps. Because of this, BFS returns the shortest path (in number of steps) from the start to the goal, so the algorithm is optimal when every action has the same unit cost. The asymptotic running time is O(V + E), where V is the number of vertices and E is the number of edges. The output of the above algorithm is given below.

Breadth-First Search for a feasible plan in a 2D Grid
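A side note on the implementation above: removing the front element of a Python list is an O(n) operation, so for large grids the standard library's collections.deque (not used in my code, but worth knowing) gives O(1) pops from the front while preserving the same FIFO order:

```python
from collections import deque

frontier = deque()
frontier.append((0, 0))  # enqueue nodes in arrival order
frontier.append((0, 1))
frontier.append((1, 0))

# popleft() returns the longest-waiting node first, exactly like List[0].
first = frontier.popleft()
second = frontier.popleft()
print(first, second)  # (0, 0) (0, 1)
```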

Depth-first Search:- In DFS, the priority function follows the Last-In, First-Out principle: a stack is used to push and pop nodes from the fringe. The algorithm does not expand nodes uniformly; in the implementation we can see that nodes are expanded aggressively along one direction rather than in wavefronts. The algorithm is not systematic on an infinite state space, but it still holds to be systematic on most finite spaces. The most critical aspect here is that, following the LIFO principle, I always use List.pop() to pick the node for expansion; this is the only difference between the BFS and DFS implementations. It makes sure that the most recently added node is expanded first and the longest-waiting node last. This aggressive expansion along one direction can be useful at times. Again, the asymptotic cost is O(V + E).
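The one-line difference between the two strategies can be seen on a toy frontier (plain strings here instead of grid nodes):

```python
frontier = ["A", "B", "C"]  # nodes in the order they were inserted

# BFS (FIFO): expand the node that has waited the longest.
bfs_next = frontier[0]    # "A" -- what List.pop(0) would return

# DFS (LIFO): expand the most recently inserted node.
dfs_next = frontier[-1]   # "C" -- what List.pop() would return
print(bfs_next, dfs_next)
```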

def depthFirstSearch(problem):

    path_found = False

    List = []   # fringe (LIFO stack)
    dead = []   # nodes already visited

    Goalnode = problem.getGoalState()
    Startnode = problem.getStartState()

    dict_pathRecon = dict()                  # child -> parent node
    dict_pathRecon[str(Startnode)] = None

    temp_dict_pathRecon = dict()             # child -> (node, action, cost)
    temp_dict_pathRecon[str(Startnode)] = None

    List.append(Startnode)
    dead.append(Startnode)

    while len(List) > 0:
        curr = List.pop()   # LIFO: expand the most recently added node
        if curr == Goalnode:
            path_found = True
            print("Found goal")
            break

        for i in problem.getSuccessors(curr):
            next_node = i[0]
            if next_node not in dead:
                List.append(next_node)
                dict_pathRecon[str(next_node)] = curr
                temp_dict_pathRecon[str(next_node)] = i
                dead.append(next_node)

    print("The total number of nodes explored:- ", len(dead))

    # Path reconstruction: walk back from the goal to the start
    path = []
    if path_found:
        while dict_pathRecon[str(Goalnode)] is not None:
            path.append(temp_dict_pathRecon[str(Goalnode)][1])
            Goalnode = dict_pathRecon[str(Goalnode)]
        path.reverse()

    print("The path here is:-", path, len(path))

    return path
Depth-First Search for a feasible plan in a 2D Grid

Uniform cost search:-

UCS, or uniform cost search, is an improvement on top of breadth-first search and can be seen as a variant of Dijkstra's algorithm, in which the algorithm accounts for the cost of traversing from node A to node B. The priority function chooses the node with the least cumulative cost among the available nodes, where the cumulative cost is the total cost spent to arrive at that particular node. This algorithm again searches the environment in wavefronts and gives a path with minimal cost to reach the goal node. The Python implementation of uniform cost search and its corresponding result are given below:-

def uniformCostSearch(problem):

    source = problem.getStartState()
    destination = problem.getGoalState()

    path_found = False

    visited = []

    node_dict = dict()   # fringe: "x,y" key -> cumulative cost

    startnode_as_key = str(source[0]) + ',' + str(source[1])
    node_dict[startnode_as_key] = 0

    visited.append(source)

    dict_pathRecon = dict()                # child -> parent node
    dict_pathRecon[str(source)] = None

    temp_dict_pathRecon = dict()           # child -> (node, action, cost)
    temp_dict_pathRecon[str(source)] = None

    while len(node_dict) > 0:

        # Pick the fringe node with the least cumulative cost
        node_with_min_val = min(node_dict, key=node_dict.get)
        costOf_node_with_min_val = node_dict[node_with_min_val]
        curr_pre_temp = node_with_min_val.split(',')

        del node_dict[node_with_min_val]
        curr_node = [int(curr_pre_temp[0]), int(curr_pre_temp[1])]

        if curr_node == destination:
            print("Found path")
            path_found = True
            break

        c_cost = costOf_node_with_min_val

        neighbours = problem.getSuccessors(curr_node)
        visited.append(curr_node)

        for neigh in neighbours:
            next_node = neigh[0]
            next_node_action = neigh[1]
            next_node_cost = neigh[2]
            if next_node not in visited:
                node_dict[str(next_node[0]) + ',' + str(next_node[1])] = next_node_cost + c_cost
                dict_pathRecon[str(next_node)] = curr_node
                temp_dict_pathRecon[str(next_node)] = neigh

    # Path reconstruction: walk back from the goal to the start
    path = []
    if path_found:
        while dict_pathRecon[str(destination)] is not None:
            path.append(temp_dict_pathRecon[str(destination)][1])
            destination = dict_pathRecon[str(destination)]
        path.reverse()
        print("The total number of nodes explored:- ", len(visited))

    return path

As you can see, I have used the statement node_with_min_val = min(node_dict, key=node_dict.get) to pick the node with the least cumulative cost. This strategy of choosing the next node helps us find the path of least cost. The cost can differ between applications; it could be distance, time, ease of traversal, etc. In the results, you can see that the path returned by the algorithm avoids the grey cells, which have a higher traversal cost than the free blue cells with unit cost.
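Scanning the whole dictionary with min on every iteration costs O(n); the standard library's heapq module offers an O(log n) alternative for the same least-cumulative-cost selection. A minimal sketch with made-up costs (not a drop-in replacement for the code above):

```python
import heapq

# Each fringe entry is a (cumulative_cost, node) tuple; the heap keeps
# the entry with the smallest cost at the front.
frontier = []
heapq.heappush(frontier, (3, (1, 0)))
heapq.heappush(frontier, (1, (0, 1)))
heapq.heappush(frontier, (2, (1, 1)))

cost, node = heapq.heappop(frontier)  # cheapest node comes out first
print(cost, node)  # 1 (0, 1)
```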

Path exploration using Uniform cost search on a 2D Grid

Comparing and contrasting the three algorithms presented above: all of them are capable of returning a feasible plan if one exists, but only BFS and UCS give an optimal plan from the source to the destination. The biggest limitation of these algorithms is that they do not scale to larger state spaces with higher dimensions; the number of explored nodes explodes, for example, when they are run on 3D spaces. The running time becomes so high that they are not practical for most applications, though they work well on finite state spaces of smaller dimensions (especially 2D).

References:-

  1. “Planning Algorithms” by Prof. Steven LaValle
  2. Chapter 3 of “Principles of Robot Motion, Theory, Algorithms, and Implementation” by Howie Choset, Kevin Lynch, Seth Hutchinson, George Kantor, Wolfram Burgard, Lydia Kavraki, and Sebastian Thrun.
  3. I also would like to thank my Professor Dr. Bijo Sebastian (Engineering design — IIT Madras) for teaching me these algorithms and providing me with the code for running the visualizations. => https://bijosebastian.wordpress.com/
