Have there been gains from using more complex genome models in genetic algorithms?

In the limited work I've done with GAs, the chromosome was essentially a mapping from every possible state in the system to an "action" to take in that state, with a fixed number of possible actions. This was for a "robot picking up cans in a room" scenario (as described in Melanie Mitchell's book on complex systems).

The state encoded whether there was a wall to the N, S, E, or W, and/or a can to the N, S, E, or W. That worked out to about 240 different states. The chromosome was then a lookup table of what to do in each of those states: move N, S, E, W, or try to pick up a can. I added "move randomly" to make it more interesting.
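
To make that concrete, here's roughly what such a lookup-table chromosome looks like in Python. This is a hedged sketch, not my actual code: the specific encoding (five sensed sites, the current cell plus N/S/E/W, each empty, can, or wall, giving 3^5 = 243 states) is an assumption chosen to land near that ~240-state count, and the action names and `encode_state` are illustrative.

```python
import random

# Sketch of a lookup-table chromosome. The encoding here (current cell plus
# N/S/E/W, each empty / can / wall -> 3**5 = 243 states) is an assumption,
# chosen only to be in the ballpark of the ~240-state count above.

ACTIONS = ["move_N", "move_S", "move_E", "move_W", "move_random", "pick_up"]
NUM_STATES = 3 ** 5

def encode_state(sites):
    """sites: five values in {0: empty, 1: can, 2: wall}, ordered
    (current, N, S, E, W). Returns an index into the chromosome."""
    idx = 0
    for s in sites:
        idx = idx * 3 + s
    return idx

def random_chromosome():
    """A chromosome is just one action per state."""
    return [random.choice(ACTIONS) for _ in range(NUM_STATES)]

# Using a chromosome as the robot's "program":
chrom = random_chromosome()
action = chrom[encode_state([0, 1, 0, 2, 0])]  # can to the N, wall to the E
```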

In that particular case, the only meaningful way to have a bigger/more complex chromosome would be to increase the state space. Maybe instead of a von Neumann neighborhood (4 neighbors), use a Moore neighborhood (8 neighbors) and add the ability to move diagonally. Or maybe allow the robot to see further than the single block nearest it.

All of which is to say: I don't think you can just arbitrarily add chromosome complexity to a GA problem without it having an actual meaning for your problem.

Once you have that, you need some way to determine the fitness of a given chromosome. Maybe that's the magnitude of an error term for a regression; in my case it was the score the robot earned while using that chromosome as its "program".
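
To make the fitness idea concrete, here are two toy versions, hedged heavily: `predict` and `run_session` are hypothetical stand-ins for whatever actually evaluates a chromosome (a regression model, or the can-collecting simulator); only the shape of the calculation is the point.

```python
def regression_fitness(chromosome, xs, ys, predict):
    """Negative squared error: smaller error means higher fitness.
    `predict(chromosome, x)` is a hypothetical evaluator."""
    return -sum((predict(chromosome, x) - y) ** 2 for x, y in zip(xs, ys))

def robot_fitness(chromosome, run_session, num_sessions=100):
    """Average score over several randomly generated sessions.
    `run_session(chromosome)` is a stand-in for the simulator."""
    return sum(run_session(chromosome) for _ in range(num_sessions)) / num_sessions
```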

After that, you're trying to manipulate the chromosome to optimize the fitness function. In that sense it's like other optimization techniques, such as gradient descent or simulated annealing, but the tools of adjustment are different. Typically you have things like random mutation (think radiation in the environment), crossover, and fitness-weighted pairing (better performers get a better chance to "mate"). You can even do things like elitism, where you keep the best n chromosomes from one generation as-is and carry them into the next generation.
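
Here's a rough sketch of how those operators fit together in one generation. Again, this isn't my original code: the single-point crossover, the mutation rate, and the fitness-weighted mating are just reasonable defaults to show the mechanics.

```python
import random

def crossover(a, b):
    """Single-point crossover between two parent chromosomes (lists)."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(chrom, actions, rate=0.005):
    """Random mutation: each gene has a small chance of being re-rolled."""
    return [random.choice(actions) if random.random() < rate else g
            for g in chrom]

def next_generation(population, fitness_fn, actions, elite_n=2):
    """One generation: elitism, fitness-weighted mating, crossover, mutation."""
    scored = sorted(((fitness_fn(c), c) for c in population),
                    key=lambda fc: fc[0], reverse=True)
    parents = [c for _, c in scored]
    # Shift fitnesses so all mating weights are positive.
    low = min(f for f, _ in scored)
    weights = [f - low + 1e-9 for f, _ in scored]

    new_pop = parents[:elite_n]  # elitism: carry the best over unchanged
    while len(new_pop) < len(population):
        p1, p2 = random.choices(parents, weights=weights, k=2)  # better performers mate more often
        for child in crossover(p1, p2):
            if len(new_pop) < len(population):
                new_pop.append(mutate(child, actions))
    return new_pop
```

Run that in a loop for as many generations as you can afford, keeping track of the best chromosome you've seen so far.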

In my work, I cited this: Obitko, Marek. Introduction to Genetic Algorithms (1998), http://www.obitko.com/tutorials/genetic-algorithms/index.php, and particularly the pages on crossover and mutation (http://www.obitko.com/tutorials/genetic-algorithms/crossover-mutation.php) and on selection (http://www.obitko.com/tutorials/genetic-algorithms/selection.php).

He talks about crossover, but it's not exactly analogous to living beings: at least in the GAs I've seen, each individual carries just one sequence, not a paired helix you could run a straight-up meiosis operation on. The crossing over happens between pairs of "individuals" instead.

In the end, there's nothing particularly special about GAs. They're just one way to explore a solution space in an optimizing way. They have advantages compared to other methods: gradient descent, for example, requires that your solution space be differentiable, whereas GAs can explore pretty much any space. On the other hand, GAs can be very slow to converge on a good solution, and there's no good way to tell whether you've found the global optimum or just a local one.
