It might be helpful to see the whole error message with the reference to the line of code which threw the ValueError 😄.
Thanks for reporting this @tex-downey. I took a look at the problems and I think the first one is the same as the one already addressed in #150.
Problem: Far from the optimum

First of all, you use 10 particles, and these 10 particles are then uniformly distributed over the whole space. If we take the bounds from the second example, we'd have a space of 10e22 x 10e22 square units. This results in huge distances between the particles and in insanely high "velocities", which might cause the particles to "fly" into the bounds, where they then don't move a lot.
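To make the scale concrete, here is a small NumPy illustration (just a sketch; the seed and array shapes are arbitrary):

import numpy as np

np.random.seed(0)
lo, hi = -10e22, 10e22                        # bounds from the second example
pos = np.random.uniform(lo, hi, (10, 2))      # 10 particles, uniformly spread
dists = np.abs(pos[:, None, :] - pos[None, :, :])  # pairwise coordinate distances
print(dists.max())                            # ~1e23: the velocity terms scale with this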
This next one gives a result that is far from the near-zero optimum.
The optimization results are actually pretty "close" to the optimum in comparison to the bounds. Even if you got a result around 10e5, you'd still be orders of magnitude below the boundary conditions. PSO scales with the search space because the velocity is calculated from the distances between positions, which in this example are massive.
Also, you have to increase the number of iterations in such a vast space. Just compare the results of, for example, bounds of -10 to 10 and bounds of -100 to 100 with 100 iterations each: the run with bounds of 10 ends up orders of magnitude below the run with bounds of 100 (see the sketch below).
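A rough sketch of that comparison, assuming the same GlobalBestPSO setup that appears in the traceback further down (the option values here are assumptions):

import numpy as np
import pyswarms as ps
from pyswarms.utils.functions.single_obj import sphere_func

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
for b in (10.0, 100.0):
    bounds = (np.array([-b, -b]), np.array([b, b]))
    optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                        options=options, bounds=bounds)
    cost, pos = optimizer.optimize(sphere_func, print_step=100, iters=100)
    print('bounds +/-', b, '-> final cost', cost)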
@ljvmiranda921 maybe we should note the fact that PSO scales with the search space somewhere in the documentation so people don't get confused by these huge numbers they get out when they use large boundaries?
Problem: ValueError
And this last one returns the error "ValueError: operands could not be broadcast together with shapes (0,) (10,2)"
This is actually a result of an OverflowError in sphere_func. The sphere function is implemented like so:
j = (x**2.0).sum(axis=1)
It emits the following warning:
RuntimeWarning: overflow encountered in square
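A minimal NumPy reproduction of what happens inside sphere_func (a sketch; the position values are made up to match the huge bounds):

import numpy as np

x = np.full((10, 2), 10e200)   # positions near the huge bounds
j = (x**2.0).sum(axis=1)       # emits "RuntimeWarning: overflow encountered in square"
print(j)                       # [inf inf ... inf] -- every cost becomes inf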
If you square a number near 10e200, you get around 10e400, but the float maximum in Python is (in my Python distribution):
>>> sys.float_info.max
1.7976931348623157e+308
>>> 10e200**2
Traceback (most recent call last):
File "<pyshell#2>", line 1, in <module>
10e200**2
OverflowError: (34, 'Result too large')
Consequently, the squared number is way above the limit of Python itself.
@ljvmiranda921 we should include this one in the documentation as well. Or can we fix this with anything? I found this standard library module which supports alterable precision.
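(Presumably that module is decimal from the standard library; a quick sketch of how it survives the squaring:)

from decimal import Decimal, getcontext

getcontext().prec = 50       # the precision is alterable
x = Decimal("10e200")
print(x ** 2)                # 1.00E+402 -- no overflow at this magnitude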
Hi @tex-downey, thank you for reporting this. Hope we can find a fix for this soon. We have #150 being worked on by a contributor and we're hoping that it can help with your problem. They actually mentioned via email that they encountered a problem with respect to bounds with very large or very small numbers, so I'm guessing this has a relation to that.
Running the example with options = {'c1':0.5, 'c2':0.3, 'w':0.9} (arbitrarily set) actually works:

2018-07-05 23:44:33,657 - pyswarms.single.global_best - INFO - Arguments Passed to Objective Function: {}
2018-07-05 23:44:33,686 - pyswarms.single.global_best - INFO - Iteration 1/1000, cost: 38522.37524909945
2018-07-05 23:44:33,705 - pyswarms.single.global_best - INFO - Iteration 101/1000, cost: 0.027008938644285944
2018-07-05 23:44:33,715 - pyswarms.single.global_best - INFO - Iteration 201/1000, cost: 3.9818213529235787e-07
2018-07-05 23:44:33,726 - pyswarms.single.global_best - INFO - Iteration 301/1000, cost: 5.533968978819569e-10
2018-07-05 23:44:33,736 - pyswarms.single.global_best - INFO - Iteration 401/1000, cost: 7.95962484076589e-13
2018-07-05 23:44:33,746 - pyswarms.single.global_best - INFO - Iteration 501/1000, cost: 1.96989936025607e-18
2018-07-05 23:44:33,759 - pyswarms.single.global_best - INFO - Iteration 601/1000, cost: 9.483864040021896e-23
2018-07-05 23:44:33,769 - pyswarms.single.global_best - INFO - Iteration 701/1000, cost: 3.4722275916531227e-25
2018-07-05 23:44:33,779 - pyswarms.single.global_best - INFO - Iteration 801/1000, cost: 3.467879498721964e-28
2018-07-05 23:44:33,792 - pyswarms.single.global_best - INFO - Iteration 901/1000, cost: 1.5093798533157134e-31
2018-07-05 23:44:33,803 - pyswarms.single.global_best - INFO - ================================
Optimization finished!
Final cost: 0.0000
Best value: [-1.3130195466568111e-18, -1.7853829485533862e-19]
Although I'm aware that there may be weird behaviors when given very large and very small values, maybe your problem can be solved by tweaking the optimizer parameters (not enough exploration, very low inertia, etc.)?
As for the ValueError: it shows up when the costs overflow to inf (so even the best costs stay at inf). The traceback looks like the following:

---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-ddc1d5f780a4> in <module>()
----> 1 cost, pos = optimizer.optimize(sphere_func, print_step=100, iters=1000, verbose=3)
~/Documents/Dev/pyswarms/pyswarms/single/global_best.py in optimize(self, objective_func, iters, print_step, verbose, **kwargs)
199 # Perform velocity and position updates
200 self.swarm.velocity = self.top.compute_velocity(
--> 201 self.swarm, self.velocity_clamp
202 )
203 self.swarm.position = self.top.compute_position(
~/Documents/Dev/pyswarms/pyswarms/backend/topology/star.py in compute_velocity(self, swarm, clamp)
108 Updated velocity matrix
109 """
--> 110 return ops.compute_velocity(swarm, clamp)
111
112 def compute_position(self, swarm, bounds=None):
~/Documents/Dev/pyswarms/pyswarms/backend/operators.py in compute_velocity(swarm, clamp)
124 c2
125 * np.random.uniform(0, 1, swarm_size)
--> 126 * (swarm.best_pos - swarm.position)
127 )
128 # Compute temp velocity (subject to clamping if possible)
ValueError: operands could not be broadcast together with shapes (0,) (10,2)
I guess the culprit here would be this line in global_best:

if np.min(self.swarm.pbest_cost) < self.swarm.best_cost:
    self.swarm.best_pos, self.swarm.best_cost = self.top.compute_gbest(self.swarm)

If there is no pbest_cost that is smaller than best_cost, then best_pos (which defaults to an empty array when initialized) is never updated. That's why we get (0,) as the shape of swarm.best_pos: we end up subtracting from an empty array. In this case, every pbest_cost is inf and best_cost also defaults to inf, so the comparison returns False and compute_gbest is never executed. Thus, the default best_pos (i.e., []) is not updated. Maybe a hacky fix can be done on this while waiting for #150. I'll also try to think of something next week.
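For what it's worth, one possible shape of such a hacky guard (an untested sketch; it assumes the swarm also carries pbest_pos, as the backend Swarm does):

if np.min(self.swarm.pbest_cost) < self.swarm.best_cost:
    self.swarm.best_pos, self.swarm.best_cost = self.top.compute_gbest(self.swarm)
elif self.swarm.best_pos.size == 0:
    # best_cost is still inf, so compute_gbest never ran; seed best_pos with the
    # current best personal position so later velocity updates can broadcast
    self.swarm.best_pos = self.swarm.pbest_pos[np.argmin(self.swarm.pbest_cost)]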
Hi @whzup , thanks for replying. Just to answer your queries:
maybe we should note the fact that PSO scales with the search space somewhere in the documentation so people don't get confused by these huge numbers they get out when they use large boundaries?
Gotcha. I'll add it on #154. Let's check first if #150 or my solution above solves this, but noted.
we should include this one in the documentation as well. Or can we fix this with anything? I found this standard library module which supports alterable precision.
Sure. Let's try a precision fix on v.0.4.0
No problem 😋. @ljvmiranda921 what do you think about the proposal of @tex-downey to have a feature to make it possible to only consider positive (or negative) numbers?
what do you think about the proposal of @tex-downey to have a feature to make it possible to only consider positive (or negative) numbers?
Might be related to the precision fix. Still not sure how we can handle infs. Let's do this on v.0.4.0.
Describe the bug

If I try to apply a large upper bound to my problem, I no longer get the same optimal solution I was getting. I would think that the bounds would only come into play if a particle reaches the bound, and therefore should not affect the solution too much.
Also, if the bounds are large enough PySwarms returns the error "ValueError: operands could not be broadcast together with shapes (0,) (10,2)"
Here are some examples, using the basic example provided. This first example works fine:
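(The exact snippet isn't shown here, so below is a sketch reconstructed from the basic pyswarms example and the optimize call visible in the traceback above; treat the parameter values as assumptions.)

import numpy as np
import pyswarms as ps
from pyswarms.utils.functions.single_obj import sphere_func

options = {'c1': 0.5, 'c2': 0.3, 'w': 0.9}
bounds = (np.array([-10, -10]), np.array([10, 10]))   # modest bounds
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options, bounds=bounds)
cost, pos = optimizer.optimize(sphere_func, print_step=100, iters=1000, verbose=3)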
This next one gives a result that is far from the near-zero optimum.
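(Again a reconstruction; the 10e22 magnitude is taken from the discussion above, only the bounds change:)

bounds = (np.array([-10e22, -10e22]), np.array([10e22, 10e22]))   # huge bounds
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options, bounds=bounds)
cost, pos = optimizer.optimize(sphere_func, print_step=100, iters=1000, verbose=3)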
And this last one returns the error "ValueError: operands could not be broadcast together with shapes (0,) (10,2)"
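(Reconstructed as well; the 10e200 magnitude matches the overflow analysis above:)

bounds = (np.array([-10e200, -10e200]), np.array([10e200, 10e200]))  # overflow range
optimizer = ps.single.GlobalBestPSO(n_particles=10, dimensions=2,
                                    options=options, bounds=bounds)
cost, pos = optimizer.optimize(sphere_func, print_step=100, iters=1000, verbose=3)
# ValueError: operands could not be broadcast together with shapes (0,) (10,2)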
Maybe I am misunderstanding what is meant by the term bounds, but I don't think it should behave like this. Also, a nice feature would be to allow the user to set an infinite upper bound, say (0, inf), for the case where they only want to consider positive numbers.
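For illustration, the proposed usage might look like this (not currently supported; np.inf is shown only to sketch the idea):

bounds = (np.array([0, 0]), np.array([np.inf, np.inf]))   # consider positive numbers only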