## Brute force algorithms

### Published on 2018-04-25

Exhaustive search (also known as "brute force") is a solving technique based on the premise that computers are incredibly fast and can run millions of operations per second. Although the idea looks very simple, it should be the first thing that comes to mind when you look at a new problem. It is also known as "generate and test", because the basic idea is to generate all possible solutions and then check which of them satisfy the criteria of the problem we're solving. Once we understand that, writing a program is actually pretty simple.

For example, suppose we have a problem where we need to figure out how many divisors a given number has (let's call it N). Additionally, after analysing this problem and discussing it with a client, we learn that N will be an integer not greater than 1000. In this case, we can use the exhaustive search technique: check every number between 1 and N, and count how many of them divide N. Because computers are really fast at making calculations, this solution will run extremely quickly and print the correct answer in a few milliseconds.

```cpp
#include <iostream>
using namespace std;

int countDivisors(int N)
{
    int divisors = 0;

    for (int i = 1; i <= N; i++)
    {
        if (N % i == 0)
        {
            divisors++;
        }
    }

    return divisors;
}

int main()
{
    int N;
    cin >> N;

    cout << countDivisors(N) << endl;
    return 0;
}
```

Note how we decided to use the exhaustive search technique without even looking at other possible solutions, because we simply don't care (in reality, you would just use a math library, but bear with me). We know that computers are fast, we know the problem we're solving and the possible input parameters, and we decide that this technique is good enough at this point (even though the problem has other solutions). Of course, whether you go with this idea or research more options depends on both the information you currently have and the possibility of changes in the future. For example, an internal system might have 10 people using it at the moment, but may reach 1000 in a year. 99.9% of the time, that simply won't happen, but as a software engineer, you need to evaluate those options before deciding on an algorithm.

There are a number of tasks where exhaustive search works great, and where you can quickly and easily solve a problem without (unnecessarily) complicating the system, or without searching for more efficient solutions. If, for example, while working as a software developer in a particular company, someone asks you to calculate the average salary paid to employees, and you know that your company has fewer than 100 employees, then just write a program that connects to the company's payroll system, takes the salary (as a number) for each employee, and prints the average value. Again, note that we're not interested in whether there are more efficient solutions to this problem (for example, maybe all engineers in the company get the same salary for the same position, and that fact could be used to write a faster solution). Here, we simply decide that we shouldn't complicate things any further (because there are only 100 employees), so that idea is thrown out the window immediately. Computers can make millions of calculations in a matter of milliseconds, and you should use that fact as much as possible.

To quote the KISS principle, noted by the U.S. Navy in 1960: "Keep it simple, stupid". Many systems work best when kept simple, and unnecessary complexity should be avoided whenever possible.

Let us now consider another example. We have a chessboard (of standard size, 8x8) with 3 rooks (the rook is a chess piece) placed on it. On how many different squares can we put a fourth rook, so that it will not be attacked by any of the other rooks? If you haven't played chess before, you should know that a rook attacks pieces in the same row (horizontally) or the same column (vertically).

First, note how small the chessboard is. Since the board has 8x8 = 64 squares, there are only 64 possible positions for the new rook. If a particular square is free (no other chess piece occupies it) and is not attacked by another rook (same row or column), then we count it as a valid position.

Let's look at a program that solves this problem. To store the position of each rook, we will use a pair of integers, pair<int, int>. In the function possiblePositions(), we check all 8x8 squares as candidates for the new rook.

```cpp
#include <iostream>
using namespace std;

int possiblePositions(pair<int, int> rook1, pair<int, int> rook2, pair<int, int> rook3)
{
    int counter = 0;

    for (int row = 1; row <= 8; row++)
        for (int col = 1; col <= 8; col++)
        {
            if (rook1.first == row || rook1.second == col)
            {
                //same position as rook1 or attacked by rook1
                continue;
            }

            if (rook2.first == row || rook2.second == col)
            {
                //same position as rook2 or attacked by rook2
                continue;
            }

            if (rook3.first == row || rook3.second == col)
            {
                //same position as rook3 or attacked by rook3
                continue;
            }

            //valid position, not attacked
            counter++;
        }

    return counter;
}

int main()
{
    pair<int, int> rook1;
    cin >> rook1.first >> rook1.second;

    pair<int, int> rook2;
    cin >> rook2.first >> rook2.second;

    pair<int, int> rook3;
    cin >> rook3.first >> rook3.second;

    cout << possiblePositions(rook1, rook2, rook3) << endl;
    return 0;
}
```

A very common problem when working with strings is checking if one string is a substring of another one (i.e. contained in it). For example, "abc" is a substring of "xyabcZ". Let's try and write a program that solves this task by using the exhaustive search technique. The algorithm will use the following strategy: first, it will check whether the smaller string appears at the start of the larger one (the first characters), then whether it appears (starts) from the second position in the larger string, etc.; repeating the process until it finds a valid starting position (if one exists).

```cpp
#include <iostream>
#include <string>
using namespace std;

bool contains(const string& large, const string& small)
{
    for (size_t i = 0; i + small.size() <= large.size(); i++)
    {
        bool allMatched = true;
        for (size_t j = 0; j < small.size(); j++)
        {
            if (small[j] != large[i + j])
            {
                allMatched = false;
                break;
            }
        }

        if (allMatched)
        {
            //we found the string in "large", and it starts at "i"
            return true;
        }
    }

    return false;
}

int main()
{
    string large, small;
    cin >> large >> small;

    cout << contains(large, small) << endl;
    return 0;
}
```

In the program above, notice that we try every starting position from 0 upwards, but only while the condition i + small.size() <= large.size() holds. That's because there is no need to look at positions near the end of the large string if there aren't enough characters remaining to match every character of the small string.

By using recursion, it's also possible to generate various potential solutions, for example, by efficiently creating all possible subsets of a set of elements. Let's say that we have three people who need to cross a bridge. A program can then check whether it's best to start by sending 1 person (and which one), two people (which two), or all three at once. In particular, for {1, 2, 3}, the non-empty subsets are {1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}. Believe it or not, all these subsets can be created with only one "for" statement.

For the time being, let's repeat once again that the exhaustive search technique is something that you should seriously consider when looking at a new problem. You can assume that a computer can do around 100 million operations in one second. If time constraints allow it, then keep your algorithm simple and use the exhaustive search technique. There is no reason to make things more complicated than they are, or to look for more efficient algorithms than what is needed.

In particular, our first example involved counting the divisors of a given number: if we know that the maximum value for N is 1000, then we also know a computer can do a bunch of these calculations in a second. If N can be much bigger (for example, up to 1 000 000 000 000), then we need to start searching for more efficient algorithms (or just use a math library).

Moreover, if you already know how to calculate the time complexity of your algorithms, then you can easily estimate how a particular solution will behave in practice. For example, if the maximum value for N is 1 000 000, then an algorithm with linear complexity O(N) will easily finish in under a second, but an algorithm with quadratic complexity O(N^2) won't.

A note for people who participate in algorithmic programming competitions: often, you can earn partial points for a task (for example, 30 out of the possible 100) by creating an algorithm that only works for smaller values. For those tasks, if you can't think of a better solution, use the exhaustive search technique to earn those partial points, because during a competition every point matters. Additionally, after running the exhaustive search algorithm on smaller values on your computer, you might notice something about the task that you didn't notice at the beginning (for example, that all solutions are prime numbers), which can help you solve the task fully and earn all 100 points.