Genetic algorithms, Mona Lisa and JavaScript + Canvas - Nihilogic
URL:http://blog.nihilogic.dk/2009/01/genetic-mona-lisa.html
Genetic algorithms
So, the basic idea of genetic algorithms is that you have a population of individuals, each carrying a DNA string representing a possible solution to the problem (in this case, the polygonal likeness of Mona Lisa). The initial population is assigned random DNA, and subsequent generations are then created by mixing the DNA of the fittest individuals of the current population. In order to ensure diversity in the population, there's a small chance of mutation where a DNA value is randomly changed. However, Roger Alsing's project actually uses a population of only one parent, making it more like a hill climbing algorithm where the current solution is altered slightly and, if it's a better fit, the old one is discarded. I tried to go for a more proper genetic algorithm approach with an adjustable population size, selection, DNA mixing and everything.
Now, Roger used just short of a million generations to get to his result (which was very accurate). It took 3 hours for his (compiled) program to generate the resulting image, and of course, even if JS engines are getting faster, it's going to take a bit more time than that to get as nice a result as his using JavaScript. Still, even after a few hundred generations/a few minutes in my demo, with the default parameters, you should see the shape of Mona Lisa starting to take form. I'm unfortunately not very patient, so I'm not sure if my experiment can even create as good an approximation, given the necessary time.
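To give a rough idea of what one generation looks like, here's a sketch of the kind of main loop such a setup might use. All the names (computeFitness, crossover, mutate, cutoff) are my own illustrative helpers, not the demo's actual code; a couple of them are sketched further down.

    // One evolution step: score everyone, keep the fittest, breed replacements.
    // Illustrative sketch only - not the demo's actual code.
    function evolveStep(population) {
        // Rank candidates by fitness (lower = closer to the target image).
        population.sort(function (a, b) {
            return a.fitness - b.fitness;
        });

        // Keep the top slice as parents (the "successful parents cutoff").
        var count = Math.max(2, Math.floor(population.length * cutoff));
        var parents = population.slice(0, count);

        // Breed children from random pairs of parents, with a chance of mutation.
        var children = [];
        while (children.length < population.length) {
            var mom = parents[Math.floor(Math.random() * parents.length)];
            var dad = parents[Math.floor(Math.random() * parents.length)];
            var childDna = mutate(crossover(mom.dna, dad.dna));
            children.push({ dna: childDna, fitness: computeFitness(childDna) });
        }
        return children; // or parents.concat(children) if the parents stay alive
    }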
There are also a few other images you can play with. I've made the images pretty small (100x100) so that evolution would be as speedy as possible. The fitness function actually uses an even smaller (50%) version.
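The fitness measure itself boils down to rendering a candidate's polygons to a scratch canvas and comparing the pixels to the downscaled target. Something along these lines would do it; I'm guessing at the DNA layout here (an array of polygon objects), and the real demo may well store things differently:

    // Compare a candidate's rendering to the target image data.
    // targetData is the ImageData of the downscaled target image;
    // ctx is the 2D context of a same-sized scratch canvas. Sketch only.
    function computeFitness(dna, ctx, targetData) {
        ctx.clearRect(0, 0, targetData.width, targetData.height);
        for (var i = 0; i < dna.length; i++) {
            var poly = dna[i]; // assumed shape: { color: "rgba(...)", points: [[x,y], ...] }
            ctx.fillStyle = poly.color;
            ctx.beginPath();
            ctx.moveTo(poly.points[0][0], poly.points[0][1]);
            for (var j = 1; j < poly.points.length; j++) {
                ctx.lineTo(poly.points[j][0], poly.points[j][1]);
            }
            ctx.closePath();
            ctx.fill();
        }
        var pixels = ctx.getImageData(0, 0, targetData.width, targetData.height).data;
        var diff = 0;
        for (var p = 0; p < pixels.length; p += 4) { // step 4 to skip the alpha channel
            for (var c = 0; c < 3; c++) {
                var d = pixels[p + c] - targetData.data[p + c];
                diff += d * d; // or Math.abs(d) for the non-squared option
            }
        }
        return diff; // lower is better
    }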
Options
Some of the parameters can be changed before starting the evolution. They are:
- Number of polygons: The number of polygons used in the image approximation.
- Polygon complexity: The number of vertices in each polygon.
- Difference squared: If checked, the squared differences of the RGB values are used when calculating fitness, otherwise simply the absolute difference.
- Population size: The number of different candidates in each generation.
- Successful parents cutoff: The percentage of candidates selected for breeding the next generation, e.g. 0.25 = the fittest 25% of the current population.
- Mutation chance: The chance that a value will mutate when breeding new candidates, for example 0.02 = 2% chance of mutation.
- Mutation amount: The amount by which a mutated value will be changed, for example 0.01 = 1% means a random change between -1% and 1%.
- Uniform crossover: If checked, values are mixed at random one by one from each parent, otherwise a single random cut in the DNA string is made and one part from each parent is used.
- Kill parents: If checked, the new generation will consist entirely of the children of the old generation. If not checked, the parents are left alive and will compete against their offspring.
If the parents are not carried over to the new generation, you will notice that the best fitness value in the new generation might actually be worse than in the previous one. On the other hand, that could make it easier to avoid dead ends and premature convergence towards local optima.
Note that changing the parameters won't have any effect until the evolution is restarted.
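To make the crossover and mutation options above a bit more concrete, here's a rough sketch of how they might combine. It assumes the polygon data has been flattened into an array of numbers between 0 and 1 (coordinates, colour channels and so on); again, these are my own illustrative functions, not the demo's actual code.

    // DNA here is assumed to be a flat array of numbers in [0, 1].
    function crossover(mom, dad, uniform) {
        var child = [];
        var cut = Math.floor(Math.random() * mom.length); // for single-point crossover
        for (var i = 0; i < mom.length; i++) {
            if (uniform) {
                child[i] = Math.random() < 0.5 ? mom[i] : dad[i]; // pick gene by gene
            } else {
                child[i] = i < cut ? mom[i] : dad[i]; // one random cut in the DNA string
            }
        }
        return child;
    }

    function mutate(dna, chance, amount) {
        for (var i = 0; i < dna.length; i++) {
            if (Math.random() < chance) { // e.g. 0.02 = 2% chance per value
                dna[i] += (Math.random() * 2 - 1) * amount; // e.g. +/-1% for amount = 0.01
                dna[i] = Math.min(1, Math.max(0, dna[i])); // clamp to the valid range
            }
        }
        return dna;
    }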
A few results
Here are a few quick runs, showing the results.
- Mona Lisa after 25 minutes
- Firefox logo after 25 minutes
- Opera logo after 40 minutes
- Mondrian after 14 minutes
- Microbe after 9 minutes
Browsers
Since we're using canvas, there's no support for IE. Furthermore, we're using the getImageData method, so only Firefox, Opera 9.5+ and WebKit nightlies will work. I suggest using either the latest Firefox beta/preview or a recent WebKit nightly, as they seem to yield the best performance.
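For what it's worth, a feature check along these lines would tell you whether a given browser can run this kind of demo at all; this is my own sketch, not the check the demo actually performs.

    // Rough check for canvas plus pixel access (getImageData).
    var canvas = document.createElement("canvas");
    var ctx = canvas.getContext && canvas.getContext("2d");
    if (!ctx || !ctx.getImageData) {
        // Without getImageData the fitness function has nothing to compare.
        alert("Sorry, this demo needs canvas with getImageData support.");
    }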
One last note
Only now, after I'd been playing around with this, have I noticed that someone had already done a JavaScript/canvas version of Alsing's program (where you can even use your own images) back when the original article was published, and for some reason I missed it. That version stays closer to what Alsing was doing, though, whereas mine differs in a lot of ways.
I think my approach gets to something resembling the target image faster, but it seems to have problems getting the details in place after that. I haven't had the patience to let it run for more than a couple of hours, and it's quite possible that the other techniques are able to get a better approximation in the long run.