torressa / cspy

A collection of algorithms for the (Resource) Constrained Shortest Path problem in Python / C++ / C#
https://torressa.github.io/cspy/
MIT License

PSOLGENT Local Best calculation wrong? #57

Closed: Schorsch1705 closed this issue 3 years ago

Schorsch1705 commented 3 years ago

For the local best value, only one of the first three particles is ever taken.

I am not very good at Python, so I am not sure about this, but from what I can see in the source code and my examples, with neighbourhood_size = 3:

```python
self.local_best = array([self.pos[argmin(self.fitness[bottom:top])]] * self.swarm_size)
```

In that case the line above looks at 3 fitness values (which ones depends on the iteration) and returns the index of the minimum. But that index is relative to the slice, not to self.pos: it is always 0, 1 or 2, depending on which of the three fitness values is lowest. That index is then used to read from self.pos, so the particle at position 0, 1 or 2 is taken as the local best, even though the fitness values considered belong to particles with higher indices.

Regardless of this, I don't quite understand why the neighbourhood is tied to the iteration. Shouldn't the neighbourhood of each particle be considered separately (according to Marinakis)? In the PSOLGENT variant the neighbourhood also grows depending on whether an improvement takes place or not. With a static neighbourhood this would be the PSOLGNT variant. Please enlighten me and help me find my thinking error :)

torressa commented 3 years ago

Hi @Schorsch1705 thanks for the issue!

I don't understand the index explanation. I took the calculation straight from the paper (equations 14 + 15); the argmin function ensures that the particle with minimum fitness (we are minimising) in the given neighbourhood is selected (this neighbourhood varies with the iteration, as you said).

You are correct, PSOLGNT is more appropriate, as the neighbourhood is indeed static. I missed that part of the paper, but after a second read it doesn't seem hard to implement: it just requires an extra parameter for the maximum number of iterations (it_num in the paper) and the neighbourhood updates for each particle (as you mention).
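If it helps, the per-particle neighbourhood growth described above could be sketched roughly like this (hypothetical names; whether it grows by one particle per non-improving iteration and resets on improvement is an assumption on my part, not the paper's exact rule):

```python
def next_neighbourhood_size(current, improved, initial, swarm_size):
    """Grow a particle's neighbourhood while it fails to improve its local best,
    and reset it to the initial size once an improvement is found (sketch only)."""
    if improved:
        return initial
    return min(current + 1, swarm_size)
```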

I haven't used this algorithm in a while due to its speed (the instances solved in the paper are not that big), so this will have low priority. However, since you are new to Python, it could be a nice exercise.

Schorsch1705 commented 3 years ago

Hi and thanks for the quick reply!

Let me explain this with an example: neighbourhood_size = 3, swarm_size = 100, max_iter = 1000.

Iteration 69 leads to bottom = 66 and top = 69. Then argmin(self.fitness[bottom:top]) looks at the 3 fitness values of particles 66, 67 and 68 and returns the index of the minimum, but that index is 0, 1 or 2, not 66, 67 or 68. That index is then used to read from self.pos, so the particle at position 0, 1 or 2 is taken as the local best value in every iteration, although particle 66, 67 or 68 actually represents the local best.
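You can see the relative indexing in isolation with a small standalone check (made-up numbers, not the actual PSOLGENT arrays):

```python
import numpy as np

rng = np.random.default_rng(0)
fitness = rng.random(100)          # one made-up fitness value per particle
pos = np.arange(100)               # stand-in for self.pos, so pos[i] == i

bottom, top = 66, 69
rel = np.argmin(fitness[bottom:top])      # index relative to the slice: always 0, 1 or 2
print(rel, pos[rel])                      # reads from particles 0-2, not 66-68
print(bottom + rel, pos[bottom + rel])    # offsetting by `bottom` hits the intended particle
```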

And my understanding is that the i in equations 14 and 15 of the paper is not the iteration counter. The neighbourhood of each particle should be considered individually in each iteration. So in iteration 69, the local best of particle 10 is the minimum over particles 9, 10 and 11; the local best of particle 11 is the minimum over particles 10, 11 and 12, and so on. Or am I completely wrong? If so, please excuse me.
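In code, my reading would be roughly the sketch below (a numpy illustration with hypothetical names, assuming a symmetric window around each particle and that lower fitness is better; it is not meant to match the library's internals):

```python
import numpy as np

def local_best_per_particle(pos, fitness, neighbourhood_size=3):
    """For each particle i, take the position of the fittest particle in a
    window of `neighbourhood_size` particles centred on i (clipped at the ends)."""
    swarm_size = len(fitness)
    half = neighbourhood_size // 2
    local_best = np.empty_like(pos)
    for i in range(swarm_size):
        lo = max(0, i - half)
        hi = min(swarm_size, i + half + 1)
        j = lo + np.argmin(fitness[lo:hi])  # absolute index of the best neighbour
        local_best[i] = pos[j]
    return local_best
```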

Anyway, thanks again for the quick response!

torressa commented 3 years ago

Aha, I understand now! You are correct, my bad: I was missing an indexing step. And you are right again, I misunderstood the paper regarding the iteration number.

I've just pushed a fix. Thanks for the correction! :+1:
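For reference, the correction amounts to offsetting the slice-relative index by bottom, roughly along these lines (a sketch; the committed change may differ):

```python
# Offset the slice-relative argmin by `bottom` so it indexes the whole swarm,
# not the neighbourhood slice.
best_idx = bottom + argmin(self.fitness[bottom:top])
self.local_best = array([self.pos[best_idx]] * self.swarm_size)
```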

Schorsch1705 commented 3 years ago

Awesome! Thank you so much!