Making Your Own Strategy : A Simple EDA¶
As seen in the Covariance Matrix Adaptation Evolution Strategy example, the
eaGenerateUpdate()
algorithm is suited to strategies that learn the problem distribution from the
population. Here we'll cover how to implement a strategy that generates
individuals from a sampling distribution learnt from the previously sampled
population.
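To make the generate-update paradigm concrete, what eaGenerateUpdate() does each generation is, roughly, the following (statistics, logbook and hall-of-fame handling are omitted in this simplified sketch):

# Simplified sketch of the generate-update loop used by eaGenerateUpdate().
for gen in range(ngen):
    # Ask the strategy for a fresh population
    population = toolbox.generate()

    # Evaluate every individual and assign its fitness
    fitnesses = toolbox.map(toolbox.evaluate, population)
    for ind, fit in zip(population, fitnesses):
        ind.fitness.values = fit

    # Let the strategy learn from the evaluated population
    toolbox.update(population)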
Estimation of distribution¶
The basic concept behind an EDA is to sample individuals from a certain
distribution and to estimate the problem distribution from the best of those
individuals. This really simple concept adheres to the generate-update logic:
the strategy contains a random number generator whose parameters are adapted
from the population. The following
EDA
class does just that.
class EDA(object):
    def __init__(self, centroid, sigma, mu, lambda_):
        self.dim = len(centroid)
        self.loc = numpy.array(centroid)
        self.sigma = numpy.array(sigma)
        self.lambda_ = lambda_
        self.mu = mu

    def generate(self, ind_init):
        # Generate lambda_ individuals and put them into the provided class
        arz = self.sigma * numpy.random.randn(self.lambda_, self.dim) + self.loc
        return list(map(ind_init, arz))

    def update(self, population):
        # Sort individuals so the best is first
        sorted_pop = sorted(population, key=attrgetter("fitness"), reverse=True)

        # Compute the average of the mu best individuals
        z = sorted_pop[:self.mu] - self.loc
        avg = numpy.mean(z, axis=0)

        # Adjust variances of the distribution
        self.sigma = numpy.sqrt(numpy.sum((z - avg)**2, axis=0) / (self.mu - 1.0))
        self.loc = self.loc + avg
A normal random number generator is initialized with a mean
(centroid) and a standard deviation (sigma) for each
dimension. The generate()
method uses numpy to draw lambda_
vectors in dim dimensions, which are then used to initialize
individuals of the class given in the ind_init argument. Finally,
update()
computes the average (centre) of the mu best individuals and
estimates the standard deviation of each attribute across those individuals.
Once
update()
has been called, the distribution's parameters are changed and a new
population can be generated.
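The update rule is simply the sample mean and the (unbiased) sample standard deviation of the mu best points, expressed relative to the current centroid. The following standalone check with plain numpy (the points are made up purely for illustration) behaves the same way:

import numpy

loc = numpy.array([5.0, 5.0])           # current centroid
best = numpy.array([[4.0, 6.0],          # mu = 3 selected (best) individuals,
                    [5.0, 5.0],          # values chosen only for illustration
                    [6.0, 7.0]])

z = best - loc                           # offsets from the current centroid
avg = numpy.mean(z, axis=0)              # average offset, i.e. the centroid shift

# Unbiased per-attribute standard deviation of the selected points
sigma = numpy.sqrt(numpy.sum((z - avg)**2, axis=0) / (len(best) - 1.0))
loc = loc + avg                          # new centroid

print(loc)     # [5. 6.]
print(sigma)   # [1. 1.]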
Objects Needed¶
Two classes are needed: a minimization fitness and an individual that combines
the fitness with the real-valued attributes. Moreover, we will use
numpy.ndarray
as the base class for our individuals.
creator.create("FitnessMin", base.Fitness, weights=(-1.0,))
creator.create("Individual", numpy.ndarray, fitness=creator.FitnessMin)
Operators¶
The eaGenerateUpdate()
algorithm requires an evaluation function, a generation method, and an update
method to be registered in a toolbox. We will use the methods of an initialized
EDA
object. For the generate
method, we also pass our
Individual
class, which carries a fitness, so that the generated individuals are
transferred into it.
def main():
    N, LAMBDA = 30, 1000
    MU = int(LAMBDA/4)

    strategy = EDA(centroid=[5.0]*N, sigma=[5.0]*N, mu=MU, lambda_=LAMBDA)

    toolbox = base.Toolbox()
    toolbox.register("evaluate", benchmarks.rastrigin)
    toolbox.register("generate", strategy.generate, creator.Individual)
    toolbox.register("update", strategy.update)

    # Numpy equality function (operators.eq) between two arrays returns the
    # equality element wise, which raises an exception in the if similar()
    # check of the hall of fame. Using a different equality function like
    # numpy.array_equal or numpy.allclose solves this issue.
    hof = tools.HallOfFame(1, similar=numpy.array_equal)

    stats = tools.Statistics(lambda ind: ind.fitness.values)
    stats.register("avg", numpy.mean)
    stats.register("std", numpy.std)
    stats.register("min", numpy.min)
    stats.register("max", numpy.max)

    algorithms.eaGenerateUpdate(toolbox, ngen=150, stats=stats, halloffame=hof)

    return hof[0].fitness.values[0]
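The script can then be run from a standard entry point; a minimal way to invoke it and display the best fitness found might be:

if __name__ == "__main__":
    best_fitness = main()
    print("Best fitness found: %s" % best_fitness)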
The complete source code: examples/eda/fctmin.