Closed peterfakhry closed 5 years ago
Hello @peterfakhry and welcome to PySwarms! Please excuse the long waiting time; as you might have guessed, the project is a bit inactive at the moment (this should change in the near future, though). Thank you for your question! Do you mind sharing the code you tried to run and briefly commenting on exactly what part of it does not work? It may also be helpful to see what adjustments you have tried so far.
Thank you, Sir, and sorry for the late reply :D. I am sorry, my code is long and must read other external files to run; however, this is pseudocode of what I am trying to do:
```python
import numpy as np
import pyswarms as ps

no_of_particles = 50
iterations = 4
PSO_hist = []

def control(swarm):
    """Objective function. Takes a swarm position vector of shape
    (n_particles, dimensions) and returns a numpy.ndarray of shape
    (n_particles,)."""
    swarm = np.array(swarm, dtype=float)
    # Get the number of particles in the swarm
    n_particles = swarm.shape[0]
    fitness = np.zeros(n_particles)
    for i in range(n_particles):
        print("Iteration no. = " + str(len(PSO_hist) + 1))
        print("Particle no. = " + str(i + 1))
        try:
            Q, R = Func1(swarm[i, :])
            fitness[i] = Func2(Q, R)
        except ValueError:
            while True:
                print('-- failed particle')
                swarm[i, :] = swarm[i, :] + np.random.rand(1)
                try:
                    Q, R = Func1(swarm[i, :])
                    fitness[i] = Func2(Q, R)
                    break
                except ValueError:
                    pass
    PSO_hist.append([fitness, swarm])
    # Return a numpy.ndarray so that the shape attribute is present
    return np.array(fitness)

max_b = np.array((1, 1e-10))
min_b = 0

def PSO_R_Q(no_of_particles, iterations):
    print("=> PSO is running......")
    max_bound = max_b[0] * np.ones(80)
    max_bound[0:20] = max_b[1]
    min_bound = min_b * np.ones(80)
    bounds = (min_bound, max_bound)
    options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
    optimizer = ps.single.GlobalBestPSO(n_particles=no_of_particles,
                                        dimensions=np.size(max_bound, 0),
                                        options=options, bounds=bounds)
    cost, pos = optimizer.optimize(control, print_step=1,
                                   iters=int(iterations / 2), verbose=1)
    return cost, pos
```
As you can see, we modify the faulty particle position by a random number between 0 and 1 and retry Func1 (the while True part) until it no longer fails and gives us a fitness. Although I record this corrected particle position in the list PSO_hist, the original swarm (which PySwarms provides) is not adjusted. We need to adjust the original swarm rather than just modifying it inside the objective function. Also, PySwarms duplicates the number of iterations (I don't know why); as you can see, I always divide the number of iterations by 2. Thank you.
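A likely reason the adjustments never reach the optimizer: `np.array(swarm, dtype=float)` at the top of the objective function makes a *copy* of the positions pyswarms passes in, so all repairs are written to that copy. A minimal illustration of this copy semantics (the variable names are just for illustration):

```python
import numpy as np

original = np.zeros((3, 2))             # stands in for the swarm pyswarms passes in
copy = np.array(original, dtype=float)  # np.array(...) creates a new array, not a view
copy[0, :] += 0.5                       # "repairing" a particle on the copy...
print(original[0, 0])                   # ...leaves the original untouched: 0.0
print(copy[0, 0])                       # the repair lives only in the copy: 0.5
```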
Thanks for sharing a part of your code @peterfakhry! Although, to be honest, I'm not quite sure why you get a ValueError. Could you maybe also share the exact error message?
Regarding the swarm modification: I would recommend implementing your own optimization loop if you really need it (see here for a reference on how GlobalBest looks and here for an easy reference on how to implement a custom optimization loop). There you can easily access the swarm in every iteration.
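To make the idea of a custom loop concrete, here is a self-contained numpy sketch of a bare global-best PSO loop (deliberately *not* using the pyswarms backend API; `custom_pso` and `sphere` are illustrative names, and the update rule is the textbook one):

```python
import numpy as np

def sphere(x):
    """Toy objective: per-particle cost for a (n_particles, dims) array."""
    return np.sum(x ** 2, axis=1)

def custom_pso(func, n_particles=20, dims=4, iters=50,
               c1=0.5, c2=0.3, w=0.9, seed=0):
    """Bare-bones global-best PSO loop with full access to the swarm."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1.0, 1.0, (n_particles, dims))
    vel = np.zeros_like(pos)
    pbest_pos = pos.copy()
    pbest_cost = func(pos)

    for _ in range(iters):
        # The swarm is a plain array here, so you can inspect or
        # repair individual particles before or after each evaluation.
        cost = func(pos)
        improved = cost < pbest_cost
        pbest_pos[improved] = pos[improved]
        pbest_cost[improved] = cost[improved]
        gbest_pos = pbest_pos[np.argmin(pbest_cost)]
        # Standard global-best velocity and position update.
        r1 = rng.random(pos.shape)
        r2 = rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest_pos - pos) + c2 * r2 * (gbest_pos - pos)
        pos = pos + vel

    best = np.argmin(pbest_cost)
    return pbest_cost[best], pbest_pos[best]

best_cost, best_pos = custom_pso(sphere)
```

Because the positions are ordinary arrays inside the loop, any repair of a faulty particle persists into the next iteration.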
Also, could you maybe elaborate on your comment about the duplication of iterations? How did you notice it?
Hello whzup,
1) I get a ValueError because Func1 can sometimes raise this error; that's why I am using PSO to find a stable optimum solution (position). The error is not related to PySwarms.
2) I will look into custom loops; however, is there any chance we can make a small modification in the PySwarms code itself, because I am a beginner at coding with Python?
3) I noticed the duplicated number of iterations because, upon printing the iteration count, I found double the number (I think there is an issue here too, elaborating on the same question).
4) Can you please help me, because lately the swarm is not moving at all; whatever parameters I change, the swarm does not move during iterations.
Thank you so much for your help.
Here I tried modifying "global_best.py", but with no effect on my code (I am only copying the part where the modification occurs, from line 162):
```python
for i in range(iters):
    # Compute cost for current position and personal best
    try:
        self.swarm.current_cost = objective_func(self.swarm.position, **kwargs)
    except ValueError:
        while True:
            try:
                self.swarm.position = self.swarm.position * 0.95
                self.swarm.current_cost = objective_func(self.swarm.position, **kwargs)
                break
            except ValueError:
                pass
    self.swarm.pbest_cost = objective_func(self.swarm.pbest_pos, **kwargs)
    self.swarm.pbest_pos, self.swarm.pbest_cost = compute_pbest(
        self.swarm
    )
    best_cost_yet_found = self.swarm.best_cost
    # ... and so on
```
@peterfakhry Thanks for the further information. It might be the case that your objective function has an extremely steep gradient which in turn might lead to huge velocities. The problem is that if you have high velocities the particles may "crash" into the boundaries and not move at all (we're currently fixing this in a PR).
Let's check whether this is the case. Could you print the history of the optimizer by using print(optimizer.pos_history)?
Yes, here all iterations have the same position: POS_hist.txt. It is 4 iterations of a (5x75) swarm.
Ok, what bounds did you use for this one? I saw that you used 1e-10 as the bound for the first 20 dimensions of your optimization in your code example above. At the moment the optimization loop just omits replacing particles when they hit a border until their velocity places them inside the search space. I don't see anything else as the culprit for this behaviour at the moment.
PS: Excuse my long response times, I have exams at the moment.
Hey @peterfakhry, we implemented the handlers for the boundaries now. Try updating your pyswarms version and rerun your code. I'm wondering whether this solved your issue as well.
Hi whzup, thank you so much for your reply, I hope you did well in your exams. I actually noticed that the swarm begins to move after updating to 0.4.0. However:
1) What do you mean by handlers: velocity clamper, inertia...?
2) You mean you implemented this inside the code, so I do not have to modify anything?
3) If I want to modify my particle positions within the iterations, should I use optimizer.position and assign it a desired position (like when we initialize the swarm)? I tried modifying a certain particle position and then assigning the whole swarm position as optimizer.position = my_swarm_new_position throughout the iterations.
4) Can we update the inertia through iterations?
Thanks a lot!
@peterfakhry No problem, I am here to help.
1) The handlers are two new classes in the backend of pyswarms. Their task is to handle particles that would surpass the boundaries. They are built into the optimizers, though, so you don't have to invoke them yourself. The only thing you can play around with is the strategies (the bh_strategy and vh_strategy). Here is an example with your code above (I recommend using this combination of strategies):

```python
optimizer = ps.single.GlobalBestPSO(n_particles=no_of_particles,
                                    dimensions=np.size(max_bound, 0),
                                    options=options, bounds=bounds,
                                    bh_strategy="nearest",
                                    vh_strategy="invert")
```
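Conceptually, a "nearest"-style boundary handler and an "invert"-style velocity handler behave roughly like the sketch below. This is a hypothetical numpy illustration of the idea, not pyswarms' actual implementation; the function names and the damping factor `z` are assumptions:

```python
import numpy as np

def handle_bounds_nearest(pos, lb, ub):
    """Snap out-of-bounds particles to the nearest boundary (a clip)."""
    return np.clip(pos, lb, ub)

def handle_velocity_invert(pos, vel, lb, ub, z=0.5):
    """Reverse (and damp by z) the velocity components that left the box."""
    out = (pos < lb) | (pos > ub)
    vel = vel.copy()
    vel[out] *= -z
    return vel

lb, ub = np.zeros(3), np.ones(3)
pos = np.array([[1.2, 0.5, -0.1]])   # first and last components are out of bounds
vel = np.array([[0.4, 0.1, -0.2]])
new_pos = handle_bounds_nearest(pos, lb, ub)        # components become 1.0, 0.5, 0.0
new_vel = handle_velocity_invert(pos, vel, lb, ub)  # components become -0.2, 0.1, 0.1
```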
2) See my answer for 1.
3) Updating optimizer.position does not affect the particles while the optimization runs, so I don't think that's what you want. Now that the swarm moves, maybe try the modification of GlobalBest you attempted earlier again? You really have to modify the positions inside the GlobalBest.optimize method; I can't think of a way to do this from the outside.
4) You can update the inertia, but again this requires a modification of GlobalBest. The inertia is stored in the options attribute of GlobalBest, so you can do the following inside of the optimization loop (the for loop in the optimize method):

```python
self.options["w"] = my_inertia_update(arguments)
```

where my_inertia_update is your inertia update function.
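As a concrete (hypothetical) example of such an update function, a linearly decreasing inertia is a common choice; the name `my_inertia_update` and its signature are illustrative, not part of the pyswarms API:

```python
def my_inertia_update(step, total_steps, w_start=0.9, w_end=0.4):
    """Linearly decrease inertia from w_start to w_end over the run.

    This is one common schedule among many; the signature is an
    assumption for illustration purposes.
    """
    frac = step / max(total_steps - 1, 1)
    return w_start - (w_start - w_end) * frac

# Inside the optimize loop this could then be, e.g.:
# self.options["w"] = my_inertia_update(i, iters)
print(my_inertia_update(0, 100))   # 0.9 at the first step
print(my_inertia_update(99, 100))  # 0.4 at the last step
```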
Hope this works now!
Hey, sorry for the late reply. So, I modified the file global_best.py (Lib/site-packages/pyswarms) from the version in the picture to this:
```python
for i in self.rep.pbar(iters, self.name):
    if not fast:
        sleep(0.01)
    # Compute cost for current position and personal best
    # fmt: off
    try:
        self.swarm.current_cost = objective_func(self.swarm.position, **kwargs)
    except ValueError:
        while True:
            print('-- failed swarm')
            print(self.swarm.position)
            self.swarm.position = self.swarm.position * 0.95
            try:
                self.swarm.current_cost = objective_func(self.swarm.position, **kwargs)
                break
            except ValueError:
                pass
    self.swarm.pbest_cost = objective_func(self.swarm.pbest_pos, **kwargs)
    self.swarm.pbest_pos, self.swarm.pbest_cost = compute_pbest(self.swarm)
```
and tested it. But I have some comments, if you could please help me with them:
1) The above modification alters the whole swarm (not just the failed particle), so if one particle is faulty we modify the whole swarm and repeat the iteration, which left me trapped in iteration no. 1 :D. Can I access only the specific particle that raises the error?
2) When trying to run again, I got this error and it persisted (I don't know what happened): TypeError: super(type, obj): obj must be an instance or subtype of type
3) All I want is for the global swarm to pick up my update for the faulty particle and not punish the whole swarm; as if the swarm goes to my code and gets tried, and when a particle fails it gets modified (in my code) and finally comes back as a modified swarm.
Thank you.
1) If you have any way to tell whether a particular particle is faulty, you can just access it by indexing. The position of the swarm is just a numpy.array. For multiple faulty particles you may use numpy array slicing.
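To sketch what indexing a single faulty particle looks like, here is a self-contained toy example; `fragile_cost` is a hypothetical stand-in for an objective that sometimes raises ValueError (like Func1 above), and the retry perturbs only the failing row of the position array:

```python
import numpy as np

def fragile_cost(x):
    """Toy per-particle objective that fails in a 'bad' region."""
    if x[0] < 0:                       # stand-in for Func1's failure condition
        raise ValueError("unstable position")
    return float(np.sum(x ** 2))

def evaluate_with_repair(positions, rng, eps=0.1, max_retries=100):
    """Evaluate each particle; on ValueError, nudge only that particle."""
    costs = np.empty(positions.shape[0])
    for i in range(positions.shape[0]):
        for _ in range(max_retries):
            try:
                costs[i] = fragile_cost(positions[i])
                break
            except ValueError:
                # Index i addresses one row of the (n_particles, dims)
                # array, so the repair survives in `positions` itself.
                positions[i] += eps * rng.random(positions.shape[1])
        else:
            raise RuntimeError(f"particle {i} could not be repaired")
    return costs

rng = np.random.default_rng(0)
pos = np.array([[-0.05, 0.2], [0.3, 0.1]])
costs = evaluate_with_repair(pos, rng)
# pos[0] has been nudged until it is valid; pos[1] is untouched.
```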
2) That's an inheritance error. Could you please show the whole traceback so I can see where the error stems from?
3) Yes, as explained in 1., you can just access the faulty particles by indexing if you can tell which ones you have to adjust.
Ok, so:
1) You mean we can form a nested loop over the number of particles, say with index j, and evaluate the objective function for every particle rather than the whole swarm, like self.swarm.current_cost[j] = objective_func(self.swarm.position[j], **kwargs) and so on? I mean, for every iteration i, do j nested iterations, one per particle, and delete the loop from my objective function? And if that is correct, how will it affect the structure of the arrays regarding their sizes?
2) This is the error:

```
File "C:\Users\Peter\Anaconda3\lib\site-packages\pyswarms\single\global_best.py", line 124, in __init__
    super(GlobalBestPSO, self).__init__(
TypeError: super(type, obj): obj must be an instance or subtype of type
```

Thank you so much.
@peterfakhry
1) Yes, nesting should be possible. I beg your pardon, what do you mean by "affect the structure of the array"? What your objective function should return? If so, I guess it should suffice to return a (, dimension)-shaped array. But you have to stack the processed arrays together to have a 2-D array with the costs (the swarm needs this).
2) That's weird. Are you using Python 3? I don't get any error when running an optimization with GlobalBestPSO.
By the way, you can also create your own GlobalBestPSO class and include it in your code instead of changing the source code.
Hi sir,
1) My objective function takes the swarm as an input and returns a single number (cost) for each particle, so it returns a 1-D array of fitness values whose size is the number of particles, (n_particles,), and not (, dimension). Let's stop here a bit: is this correct? Why did you say it should return an array of size number-of-dimensions? I can't understand this: "But you have to stack the processed arrays together to have a 2d-array with the costs (the swarm needs this)."
2) What I meant by "the structure of arrays" is their sizes. Now, when I change the calculation to be for a single particle and not the whole swarm, won't that affect the mathematical operations?
3) When I adjusted global_best.py and my code for the first time, it ran, but when trying to run it again this error appeared.
Thank you.
Yes, you are right, excuse me, I was somewhere else with my mind.
You have to return a single cost value, and after your nested loop you have to stack (e.g. with np.stack) all these cost values in an np.array such that you can assign it to the swarm's attribute like here. Since you are diving really deep into the pyswarms API with your custom optimization loop, maybe it would do no harm to study the code base carefully to see what you have to adjust so that your code works.
This is what I mean by stacking the arrays. You can see that swarm.current_cost needs to be assigned an np.array, thus you have to adjust your objective function accordingly.
Hmm, that's weird. Did you change anything else besides the screenshot you posted earlier? What Python version are you using?
Ok sir, I almost got a heart attack when you first commented. I shall spend some time in the coming days trying to produce my custom loop. Excuse me, but I will keep you updated if any problems happen. Thanks a lot for your precious help, I really appreciate it!
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hey, @peterfakhry have you resolved your problem yet?
Is your feature request related to a problem? Please describe. The problem is that some of my functions, depending on the swarm, may raise a ValueError (it is very frequent) and stop the whole iteration, which halts the optimization process. As a workaround, I 'try' the function and, if a ValueError appears, move the swarm position very slightly until the ValueError disappears, and then compute the fitness. This is done inside the objective function, but the original swarm is not updated with my adjustments. This resulted in a final position that is not real and gave me a ValueError upon testing it.
Describe the solution you'd like How to update the original swarm position with the adjustments that we make in the objective function.
Describe alternatives you've considered I tried declaring the swarm as global within the objective function, but that did not help.