My initial approach to the tiling problem was to find all the possible configurations, calculate a measure of the distance between same-colour tiles for each, and then select the configuration with the maximum distance. While this does reliably find the optimal solution, it also very quickly becomes infeasible as the number of tiles increases.
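The brute-force version can be sketched in a few lines (the distance measure here is a placeholder of my own, not the post's actual function), and it makes the blow-up obvious: the number of distinct configurations grows combinatorially with the tile count.

```python
from itertools import permutations

def brute_force(tiles, distance):
    """Try every distinct arrangement of `tiles` and keep the one
    that maximises `distance`. Feasible only for a handful of tiles."""
    best = None
    best_distance = float("-inf")
    # set() collapses duplicate arrangements that arise from
    # swapping two tiles of the same colour.
    for arrangement in set(permutations(tiles)):
        d = distance(arrangement)
        if d > best_distance:
            best, best_distance = arrangement, d
    return list(best)
```

Even deduplicated, four tiles of each of two colours already means 70 arrangements to score, and the count explodes from there.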
This problem is a bit different from the optimization problems I've worked on in the past: it's discrete rather than continuous, and rather than being a function of a handful of variables that can each take a large number of values, it's a function of a large number of variables (the number of tiles) that can each take only a handful of values (i.e. colours). Worse yet, there's no real way to consider the problem locally: you can't easily see whether changing the colour of a given tile will increase or decrease the value of the distance function without looking at a bunch of the surrounding cells. So for any given variable (i.e. tile), it's not a small calculation to determine a priori what change of value would increase your overall distance measure. You basically have to try each change and calculate the resulting value.
Since there didn't seem to be any reasonable way to adapt the optimization algorithms that I already knew, it was clear that I was going to have to try something new (and better suited to this particular application). Enter Simulated Annealing. The concept is pretty simple: we evaluate "neighbouring states", and move into a state if it's better. But, to avoid getting stuck at a local optimum, we also move into a worse state with some probability that decreases over time. This is analogous to annealing in metallurgy, where the probability of atoms settling into higher-energy states is due to the temperature, and reduces as the metal cools.
In this case, I defined "neighbouring states" as configurations with two of the tile colours swapped. I did it this way, rather than just changing the colour of a particular tile, because I'm interested in solutions with equal numbers of tiles of each colour, so I don't want to waste any effort exploring solution spaces without evenly balanced colours. So, for each iteration of the loop, I choose two random tiles, swap them, then calculate the resulting pattern's cost measure. If it's smaller than before, I keep the swap. If not, I keep it only if a random value between \(0\) and \(1\) is greater than

\[ 1 - e^{(c_1 - c_2)/T} \]
where \(c_1\) and \(c_2\) are the costs of the old and new configurations respectively (in this scenario, \(c_2 > c_1\)), and \(T\) is the temperature, which is decremented on every loop iteration. I repeat this until \(T=1\), at which point the current configuration is returned as the "optimal" solution.
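The loop described above can be sketched as follows. The grid representation (a flat list of colours), the cooling schedule, and the parameter defaults are placeholders of my own; the cost function is passed in.

```python
import math
import random

def anneal(tiles, cost, t_initial=1000.0, cooling=1.0):
    """Simulated annealing with swap moves.

    `tiles` is a flat list of tile colours and `cost` maps such a list
    to a number to be minimised; both representations are assumptions
    for illustration.
    """
    current_cost = cost(tiles)
    t = t_initial
    while t > 1:
        # Neighbouring state: swap the colours of two random tiles,
        # which keeps the colour counts balanced.
        i, j = random.sample(range(len(tiles)), 2)
        tiles[i], tiles[j] = tiles[j], tiles[i]
        new_cost = cost(tiles)
        # Keep improvements; keep a worse state only if a random
        # value exceeds 1 - e^((c1 - c2)/T).
        if new_cost < current_cost or random.random() > 1 - math.exp((current_cost - new_cost) / t):
            current_cost = new_cost
        else:
            tiles[i], tiles[j] = tiles[j], tiles[i]  # undo the swap
        t -= cooling  # temperature decremented every iteration
    return tiles
```

Because moves are swaps, the colour counts of the input are preserved exactly, which is the point of choosing this neighbourhood.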
As referenced in the previous post, I had some issues with obtaining reasonable solutions: the distance function I was originally using considered only the worst case rather than an average, so it almost never changed when only two tiles were swapped. Switching to an inverse-least-squares cost function solved this problem. Whatever cost function you use, the initial temperature needs to be calibrated so that the algorithm loops long enough to settle on an optimal solution.
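The post doesn't give the exact cost function, but an inverse-least-squares cost along these lines, summing \(1/d^2\) over every pair of same-coloured tiles, has the property described: every pair contributes to the total, so a single swap almost always moves the cost. (The grid-as-dictionary representation is an assumption for illustration.)

```python
import itertools

def inverse_square_cost(grid):
    """Sum 1/d^2 over every pair of same-coloured tiles, where d is
    the Euclidean distance between them. `grid` maps (row, col)
    positions to colours; smaller totals mean better-spread colours."""
    total = 0.0
    for (p1, c1), (p2, c2) in itertools.combinations(grid.items(), 2):
        if c1 == c2:
            d_squared = (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2
            total += 1.0 / d_squared
    return total
```

Contrast this with a worst-case measure like the minimum same-colour distance, which only changes when a swap touches the single closest pair.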
Note that switching to an inverse-square cost function means that the overall problem switched from maximization to minimization. If we were still trying to maximize the distance, we would instead use the following probability to accept a worse configuration (i.e. one with a smaller distance):

\[ 1 - e^{(d_2 - d_1)/T} \]
where \(d_1\) is the old (larger) distance and \(d_2\) is the new (smaller) distance.
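In code, only the acceptance test differs between the two formulations; everything else in the loop stays the same. A sketch (the function names are mine, and `t` is the current temperature):

```python
import math
import random

def accept_minimising(c1, c2, t):
    # Always keep a lower cost; keep a worse (higher-cost) state only
    # if a random value exceeds 1 - e^((c1 - c2)/t).
    return c2 < c1 or random.random() > 1 - math.exp((c1 - c2) / t)

def accept_maximising(d1, d2, t):
    # Signs flip: "worse" now means a *smaller* distance, so the
    # threshold becomes 1 - e^((d2 - d1)/t).
    return d2 > d1 or random.random() > 1 - math.exp((d2 - d1) / t)
```

In both cases the exponent is negative for a worse state, so the acceptance probability shrinks as the temperature drops.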
Other than being careful about signs in the probability equation, and choosing an initial temperature, the algorithm is pretty simple, and seems to work reasonably well, allowing us to get solutions for grids too large to solve by brute force.