Particle Swarm Optimization with Python

What is PSO?

There are things in nature we humans seldom understand. We can either guess what is going to happen, or we can use algorithms to remove the guesswork from the equation. PSO is one of those techniques: a routine designed to mimic birds flocking or fish schooling. Sounds interesting, right? It is one of the wonders of science. It was developed in 1995 by Eberhart and Kennedy.

Below are the only two equations that make up a bare-bones PSO algorithm. As a heads up, “k” references the current iteration, so “k+1” implies the next iteration.
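For reference, the standard PSO velocity and position updates (consistent with the description that follows) are:

```
v_i(k+1) = w * v_i(k) + c1 * rand1 * (P_i - x_i(k)) + c2 * rand2 * (P_g - x_i(k))
x_i(k+1) = x_i(k) + v_i(k+1)
```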

The Algorithm:

[Algorithm flowchart image, uploaded by Ganesh K. Venayagamoorthy]

In the two for loops, the algorithm initializes the particles' positions by drawing from a uniform random distribution, for every dimension, within the permissible range.

After that, it calculates a fitness value for each particle and compares it to that particle's own best position so far (pbest, the best position that particular particle has ever visited), and then it selects the best position among all particles as gbest.
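As a sketch of that initialization step (the swarm size, dimensionality, and bounds here are assumptions, not values from the article):

```python
import numpy as np

n_particles, n_dims = 20, 2   # assumed swarm size and dimensionality
low, high = -50.0, 50.0       # assumed permissible range per dimension

# uniform random start positions, one row per particle
positions = np.random.uniform(low, high, size=(n_particles, n_dims))
velocities = np.zeros((n_particles, n_dims))
```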

The Equation: Let’s take a closer look at the equation that defines a particle dimension’s velocity at the next iteration, Vi(k+1). W is the inertia weight: it controls how much of the movement given by the last velocity value carries over. C1 and C2 are acceleration coefficients; C1 weights the pull toward the particle’s personal best and C2 weights the pull toward the social (global) best. Pi is the best position found by the individual particle and Pg is the best position found by any particle, and the equation measures the distance from each of these positions to the particle’s current position. Rand1 and rand2 are random numbers drawn from the range [0, 1] that control how strongly the individual and social terms influence each update. After the velocity is computed, the particle’s new position is calculated, and the process repeats until a specified number of iterations or an error criterion is reached.
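A minimal sketch of that update for a single particle in NumPy (variable names and coefficient values are my own, not the article's):

```python
import numpy as np

w, c1, c2 = 0.5, 0.8, 0.9          # assumed inertia weight and acceleration coefficients

position = np.array([10.0, -4.0])  # current position x_i(k)
velocity = np.array([1.0, 1.0])    # current velocity v_i(k)
pbest = np.array([2.0, 1.0])       # particle's personal best P_i
gbest = np.array([0.0, 0.0])       # swarm's global best P_g

rand1, rand2 = np.random.rand(), np.random.rand()  # uniform in [0, 1)

# velocity and position updates, term for term as in the equations above
velocity = (w * velocity
            + c1 * rand1 * (pbest - position)
            + c2 * rand2 * (gbest - position))
position = position + velocity
```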


Implementation:

We have to find the minimum point of a function, which in this case is f(x,y) = x^2 + y^2 + 1.

So what would be the minimum for f(x,y) ?

Ans: [0, 0], where f(0, 0) = 1.
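Written as a plain Python function (the name `fitness` is my choice, not the article's):

```python
def fitness(x, y):
    # the objective we want to minimize
    return x ** 2 + y ** 2 + 1

print(fitness(0, 0))  # minimum value: 1
```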

Implementing in Python:

Library used: NumPy

The Particle class

As a particle is initiated, we draw a random position between −50 and +50 for each dimension, and pbest_position is initialized with it. pbest_position is the best individual position for the particle. pbest_value, which holds the minimum fitness found so far, is initialized (to infinity), and __str__() is defined to print the current position and the best individual value. The move() method adds the dimensions’ velocity, calculated during the search, to the positional vector.
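A self-contained sketch of such a class, assuming the attribute and method names described above (the article's exact code is not reproduced here):

```python
import numpy as np

class Particle:
    def __init__(self):
        # random starting position in [-50, 50] for each of the two dimensions
        self.position = np.random.uniform(-50, 50, 2)
        self.pbest_position = self.position.copy()
        self.pbest_value = float("inf")  # best (lowest) fitness seen so far
        self.velocity = np.zeros(2)

    def __str__(self):
        return f"position {self.position}, pbest {self.pbest_position}"

    def move(self):
        # apply the velocity computed by the search routine
        self.position = self.position + self.velocity
```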

Search space:

The Search Space controls the algorithm routine. It is responsible for keeping all the particles, identifying and setting the individual best position of each particle, managing the target error criterion, calculating the best global value, and setting the best global position. In summary, it encapsulates all the principal steps.

set_pbest and set_gbest go through all the particles: the first compares each particle's fitness to its best individual position, the second compares it to the global best. The move_particles method calculates the new velocity vector for each particle in each dimension, as explained before.
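One way those methods could look, with the fitness function hard-coded for f(x, y) = x^2 + y^2 + 1 and a minimal Particle included so the sketch runs on its own (names, coefficients, and defaults are assumptions):

```python
import numpy as np

class Particle:
    def __init__(self):
        self.position = np.random.uniform(-50, 50, 2)
        self.pbest_position = self.position.copy()
        self.pbest_value = float("inf")
        self.velocity = np.zeros(2)

    def move(self):
        self.position = self.position + self.velocity

class SearchSpace:
    def __init__(self, target, target_error, n_particles, w=0.5, c1=0.8, c2=0.9):
        self.target = target
        self.target_error = target_error
        self.particles = [Particle() for _ in range(n_particles)]
        self.gbest_value = float("inf")
        self.gbest_position = np.random.uniform(-50, 50, 2)
        self.w, self.c1, self.c2 = w, c1, c2

    def fitness(self, particle):
        x, y = particle.position
        return x ** 2 + y ** 2 + 1

    def set_pbest(self):
        # update each particle's individual best
        for p in self.particles:
            value = self.fitness(p)
            if value < p.pbest_value:
                p.pbest_value = value
                p.pbest_position = p.position.copy()

    def set_gbest(self):
        # update the swarm-wide best
        for p in self.particles:
            value = self.fitness(p)
            if value < self.gbest_value:
                self.gbest_value = value
                self.gbest_position = p.position.copy()

    def move_particles(self):
        # velocity update from the two PSO equations, then move
        for p in self.particles:
            r1, r2 = np.random.rand(), np.random.rand()
            p.velocity = (self.w * p.velocity
                          + self.c1 * r1 * (p.pbest_position - p.position)
                          + self.c2 * r2 * (self.gbest_position - p.position))
            p.move()
```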

Main Loop:

The search space is initiated with target 1. This is the target fitness value; in other words, f(x,y) = 1. So it looks for the values of x and y that give a result of 1, because that is the minimum we want to find. The user can set the target error and the number of particles. A list comprehension creates all the particles before the iterations start.

Inside each iteration, the best individual and global best positions are found and used to check the target error criterion. Until the minimum error is achieved, each particle's velocity is calculated and the particle moves to a new position.
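Putting the pieces together, a complete runnable version of that loop might look like this. Again, this is a sketch with assumed names, coefficients, and iteration cap, not the article's exact code:

```python
import numpy as np

class Particle:
    def __init__(self):
        self.position = np.random.uniform(-50, 50, 2)
        self.pbest_position = self.position.copy()
        self.pbest_value = float("inf")
        self.velocity = np.zeros(2)

    def move(self):
        self.position = self.position + self.velocity

class SearchSpace:
    def __init__(self, target, target_error, n_particles, w=0.5, c1=0.8, c2=0.9):
        self.target = target
        self.target_error = target_error
        self.particles = [Particle() for _ in range(n_particles)]
        self.gbest_value = float("inf")
        self.gbest_position = np.random.uniform(-50, 50, 2)
        self.w, self.c1, self.c2 = w, c1, c2

    def fitness(self, particle):
        x, y = particle.position
        return x ** 2 + y ** 2 + 1

    def set_pbest(self):
        for p in self.particles:
            value = self.fitness(p)
            if value < p.pbest_value:
                p.pbest_value = value
                p.pbest_position = p.position.copy()

    def set_gbest(self):
        for p in self.particles:
            value = self.fitness(p)
            if value < self.gbest_value:
                self.gbest_value = value
                self.gbest_position = p.position.copy()

    def move_particles(self):
        for p in self.particles:
            r1, r2 = np.random.rand(), np.random.rand()
            p.velocity = (self.w * p.velocity
                          + self.c1 * r1 * (p.pbest_position - p.position)
                          + self.c2 * r2 * (self.gbest_position - p.position))
            p.move()

# main loop: stop when the best fitness is within target_error of the
# target value 1, or after a fixed cap of iterations
search = SearchSpace(target=1, target_error=1e-6, n_particles=30)
for iteration in range(1000):
    search.set_pbest()
    search.set_gbest()
    if abs(search.gbest_value - search.target) <= search.target_error:
        break
    search.move_particles()

print("best position:", search.gbest_position)
print("best value:", search.gbest_value)
```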

Be cautious about w, c1 and c2: the inertia weight and acceleration coefficients control the balance between exploration and exploitation, and poorly chosen values can make the swarm diverge or stagnate.

Written by Masud Imran
