Back during my CS (Computer Science) bachelor's, Artificial Intelligence was one of the harder classes I took. In hindsight, though, I suppose that was due to my own lack of commitment and time allocation to the course.

One of the biggest blunders I made was when I received a measly 30/100 score on my first project assignment. The score was due to multiple reasons:

  1. The assignment was due the Monday after our 2-week winter break. And I (apparently) had other plans that holiday.
  2. I had focused the majority of my time on another assignment for my Cryptography class (and I am pretty sure I got full marks for that one).
  3. The assignment had to be programmed in Lisp, Scheme, or Python.
    My Lisp skills were minimal, and I had only heard of Scheme. Also, because these were older languages, I was worried I would encounter funky issues that would consume my precious time. So those two were out.
    I chose Python. Except, I had never programmed in Python before. At the time (Jan 2013), Python was rapidly gaining popularity and all the new CS freshmen were learning Python first. But as a senior, my starting language had been Java, which along with C++ was my tool of choice.
    I needed to minimize my time learning the language, so that I could focus on solving the project problem itself. Having heard that Python was quick and straightforward to learn, my choice was easy. Nonetheless, based on my score, it was still a trial by fire.

The project task was clear: create a puzzle solver for a puzzle known as Hitori. The solver was to be implemented in two flavors: a brute force solver and an intelligent solver. The intelligent solver regarded the puzzle as a CSP (constraint satisfaction problem) and required the application of forward checking with the MRV (minimum remaining values) heuristic.

For the project, I had only managed to finish the brute force solver before I ran out of time. And as any academic veteran knows by the time they become a senior: submitting something deeply flawed is better than submitting nothing. I was hoping for at least 50/100, considering I had completed half the assignment. Only later did I find out how inefficient my brute force solution was. It was overengineered; I had tried to make it elegant and clean, much to the detriment of efficiency.

The grade was a setback. But I somehow managed to get it together and still barely pass the course.

But the project had stuck with me. Even though the circumstances of completing the project were unfavorable for me, I had thoroughly enjoyed the challenge.

I kept wondering, can I redeem myself?

So 3 weeks ago I decided to revisit the project…

Hitori

Hitori is a puzzle that looks similar to Sudoku. The puzzle consists of an N by N grid filled with numbers 1 to N. To complete the puzzle, the player must change the color of some cells from white to black, while upholding 3 rules:

  1. A cell’s number must be unique within its row and column, i.e. no duplicates among the white cells.
  2. No black cells may be connected.
  3. All white cells must form a single connected group.

Two cells are connected if they are horizontally or vertically adjacent to each other. Diagonal cells are not considered to be connected.

Hitori is solved when a grid, or game state, is found in which all 3 rules are upheld. Hitori puzzles have 3 types of solutions:

  1. One solution – Puzzles with one solution are considered to be properly constructed.
  2. Multiple solutions – Multiple game states are able to uphold all 3 rules. An exhaustive search must take place to find all correct game states.
  3. No solution – All possible game states have been searched and exhausted; none upheld all 3 rules.

For this challenge, we were not told which type of solution applied to the puzzle. We only needed to find one solution (the first correct game state) or report that there was none. We could also assume that the starting state of the puzzle was legal.

Preparation

Before the solvers could be implemented, I had to write functions that check each of the rules.

Rule 1
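
Checking Rule 1 comes down to scanning every row and column for duplicate numbers among the white cells. A minimal sketch, assuming the grid representation used in the snippets later in this post, where a blackened cell is stored as 0 (the function name is my own):

def rule_one_holds(puzzle):
    # Rule 1: no white number may repeat within its row or column
    n = len(puzzle)
    columns = [[puzzle[r][c] for r in range(n)] for c in range(n)]
    for line in list(puzzle) + columns:
        whites = [v for v in line if v != 0]  # black cells don't count
        if len(whites) != len(set(whites)):   # duplicate white number found
            return False
    return True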

Rule 2
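
Rule 2 only requires a scan for orthogonally adjacent black cells. A sketch under the same assumptions (0 marks a black cell; the name is again my own):

def rule_two_holds(puzzle):
    # Rule 2: no two black cells may be horizontally or vertically adjacent
    n = len(puzzle)
    for r in range(n):
        for c in range(n):
            if puzzle[r][c] == 0:
                # checking only right and down is enough; the left and
                # up pairs were already seen earlier in the scan
                if c + 1 < n and puzzle[r][c + 1] == 0:
                    return False
                if r + 1 < n and puzzle[r + 1][c] == 0:
                    return False
    return True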

Rule 3

This rule was the most fun to implement. Here we check whether all white cells are connected. To do this, I constructed 4 functions:

  1. A function that finds and returns the first white cell, to be used as the starting node for the traversal.
  2. A function that counts the total number of white cells in the grid.
  3. A function that, via recursion or a stack, traverses each adjacent white cell until all white cells within the group have been counted.
  4. A function that compares the return values of functions 2 and 3; if they are equal, all white cells are connected in a single group.
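
Put together, the four functions might look something like the sketch below, using a stack-based traversal, the same 0-marks-black representation as before, and names of my own choosing:

def first_white(puzzle):
    # Function 1: find the starting node for the traversal
    for r, row in enumerate(puzzle):
        for c, value in enumerate(row):
            if value != 0:
                return (r, c)
    return None

def count_white(puzzle):
    # Function 2: count every white cell in the grid
    return sum(1 for row in puzzle for value in row if value != 0)

def count_connected_white(puzzle):
    # Function 3: traverse adjacent white cells until the whole group is counted
    start = first_white(puzzle)
    if start is None:
        return 0
    n = len(puzzle)
    seen = {start}
    stack = [start]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < n and 0 <= nc < n and puzzle[nr][nc] != 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                stack.append((nr, nc))
    return len(seen)

def rule_three_holds(puzzle):
    # Function 4: all white cells form one group iff the two counts agree
    return count_white(puzzle) == count_connected_white(puzzle)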

Brute Force Solver

The brute force solver uses breadth first search in order to find the solution closest to the initial game state. Breadth first search is likely to be more efficient than depth first search here, as a solution usually requires blackening only a small fraction of the cells and therefore sits close to the initial state in the search tree.

To speed up the brute force solver, it is important to prevent the BFS from searching obviously wrong states. To do this we need to:

  1. Skip cells with numbers known to be unique in their row and column (Rule 1). These cells remain white and do not have to be iterated over.
  2. Skip the cell we are blackening if it is next to an already black cell (Rule 2).
  3. Skip the cell we are blackening if it splits the white cell group in two (Rule 3).

With this pruning in place, the brute force solver follows a simple algorithm:

Create a queue containing the initial game state
Check all 3 rules to see if the initial game state is a solution
If not a solution:
    Loop until the queue is empty or a solution is found:
        Pop a game state from the queue, check for a solution
        If not a solution:
            Loop over each cell in the game state and set the cell to black:
                If the cell value was unique, skip
                If the cell has an adjacent black cell, skip
                If the cell breaks Rule 3, skip
                Else:
                    Add the new game state to the queue
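
In Python, the pseudocode might translate into something like the sketch below. This is a condensed reconstruction, not my original submission; it reuses the rule checks sketched earlier, tracks already-seen states to avoid duplicates, and leaves out the uniqueness pruning (point 1) for brevity:

from collections import deque
import copy

def brute_force_solve(puzzle):
    queue = deque([puzzle])
    seen = {tuple(map(tuple, puzzle))}
    while queue:
        state = queue.popleft()
        if rule_one_holds(state) and rule_two_holds(state) and rule_three_holds(state):
            return state  # first correct game state found
        for r in range(len(state)):
            for c in range(len(state)):
                if state[r][c] == 0:
                    continue  # cell is already black
                child = copy.deepcopy(state)
                child[r][c] = 0  # blacken the cell
                key = tuple(map(tuple, child))
                # prune duplicates and obviously wrong states (Rules 2 and 3)
                if key not in seen and rule_two_holds(child) and rule_three_holds(child):
                    seen.add(key)
                    queue.append(child)
    return None  # queue exhausted: no solution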

Intelligent Solver

The intelligent solver is a little more complicated. First, we must consider the problem as a constraint satisfaction problem (CSP). To do this we need to keep track of the domain of each cell. The domain of a cell, in our case, is the set of states the cell is still allowed to be in: black, white, both, or none. Based on the domain, our algorithm knows what action to take. For example, if a cell is allowed to be both (black or white), our algorithm has not yet visited the cell and its final state is still uncertain. A cell whose domain is either black or white has been visited and must take that state; to do otherwise would break the puzzle's rules. A cell whose domain is none has been visited one or more times, and the rules leave it no legal state. Such a game state cannot lead to a solution and is to be removed.
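
Concretely, the domains can be kept in a simple mapping from cell coordinates to a set of remaining states. A minimal sketch of the bookkeeping (the constants and names are my own):

BLACK, WHITE = "black", "white"

def initial_domains(puzzle):
    # at the start, every cell may still take either color:
    # two values left -> not yet decided
    # one value left  -> the cell is forced into that state
    # no values left  -> dead end, remove the game state
    n = len(puzzle)
    return {(r, c): {BLACK, WHITE} for r in range(n) for c in range(n)}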

Secondly, instead of using a breadth first search we use a depth first search. In this solver depth first search is more appropriate, because with forward checking we want to quickly backtrack from and eliminate incorrect branches, so as not to waste computing power on game states that can never yield a solution. Backtracking also kicks in sooner where domains are small, which tends to happen more often deeper within the tree.
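
The MRV heuristic then decides which cell to branch on next: the unassigned cell with the fewest values left in its domain, so that forced cells are handled first and dead ends surface as early as possible. A sketch of the selection step, assuming the domain mapping above:

def select_mrv_cell(domains, assigned):
    # pick the unassigned cell with the smallest remaining domain
    candidates = [cell for cell in domains if cell not in assigned]
    if not candidates:
        return None  # every cell has been assigned
    return min(candidates, key=lambda cell: len(domains[cell]))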

To apply forward checking, every time a cell's state is fixed we prune the domains of the cells it constrains: blackening a cell removes black from the domains of its orthogonal neighbors (Rule 2), while whitening a cell removes white from the domains of cells carrying the same number in its row and column (Rule 1). As soon as any domain runs empty, the branch is abandoned and the search backtracks.
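
A sketch of that pruning step, following the conventions of the earlier sketches (here the grid keeps its original numbers, colors live only in the domains, and Rule 3 is still checked separately):

def forward_check(puzzle, domains, cell, color):
    # prune the domains of the cells constrained by fixing `cell` to `color`;
    # returns False as soon as any domain runs empty (dead end)
    r, c = cell
    n = len(puzzle)
    if color == BLACK:
        # Rule 2: orthogonal neighbors of a black cell must stay white
        neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
        affected = [(i, j) for i, j in neighbors if 0 <= i < n and 0 <= j < n]
        removed = BLACK
    else:
        # Rule 1: equal numbers in this row and column may no longer stay white
        affected = [(r, j) for j in range(n) if j != c and puzzle[r][j] == puzzle[r][c]]
        affected += [(i, c) for i in range(n) if i != r and puzzle[i][c] == puzzle[r][c]]
        removed = WHITE
    for other in affected:
        domains[other].discard(removed)
        if not domains[other]:
            return False
    return True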

Results

Lessons Learned

While revisiting my code, I saw some code abominations. For example, while implementing the function that counts the total number of white cells in the puzzle for Rule 3, I came across this terrible snippet:

# Function 1 - Overengineered method
def number_of_white(puzzle):
    count = len(puzzle) * len(puzzle)  # Total possible positions
    for i in range(len(puzzle)):
        # Whites can have any value except 0, so count
        # all 0's and subtract from total possible positions
        count = count - puzzle[i].count(0)
    return count

This is really terrible. I cannot reasonably say what I was thinking while writing this. It is clearly a relic of programming under pressure, in a language where I didn't fully grasp basic for-loops at the time. So in my code revisit I changed it to this:

# Function 2 - Simple method
def total_white_cells(puzzle):
    count = 0
    for i in puzzle:
        for j in i:
            if j != 0:
                count = count + 1
    return count

Both code snippets do the same thing, just differently. The first snippet is a great example of overengineering a problem; the second is straightforward and to the point. When comparing the time needed to run either function, we can see the difference in microseconds (μs):

Overengineered function: 49.602 μs

Simple function: 7.342 μs

The simple function is roughly 6.75 times faster than the overengineered one.
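
For the curious, comparisons like this can be reproduced with Python's timeit module. A minimal sketch, assuming the two functions above are defined; the 5 by 5 grid below is a stand-in, not the puzzle the numbers above were measured on:

import timeit

puzzle = [[(r + c) % 5 + 1 for c in range(5)] for r in range(5)]  # stand-in grid

for fn in (number_of_white, total_white_cells):
    seconds = timeit.timeit(lambda: fn(puzzle), number=1000)
    print(f"{fn.__name__}: {seconds / 1000 * 1e6:.3f} μs per call")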