Closed: LemonPi closed this issue 1 year ago
Can you share a minimal example, i.e., including your usage of `ask` and `tell`? Make sure you are passing to `scheduler.tell` the same solution array as the one that you got from `scheduler.ask`.
Here's how I use `ask` and `tell`; the minimal example might take a bit to extract (I have a deadline right now, so it'll only be possible afterwards):
```python
solutions = self.scheduler.ask_dqd()
bcs = self._measure(solutions)

# Evaluate the models and record the objective and behavior.
# Note that objective is -cost.
# Get the objective gradient and also the behavior gradient.
x = ensure_tensor(self.device, self.dtype, solutions)
x.requires_grad = True
cost = self._f(x)
cost.sum().backward()

objective_grad = -x.grad.cpu().numpy()
objective = -cost.detach().cpu().numpy()
objective_grad = objective_grad.reshape(x.shape[0], 1, -1)
measure_grad = self._measure_grad(x)
jacobian = np.concatenate((objective_grad, measure_grad), axis=1)

self.scheduler.tell_dqd(objective, bcs, jacobian)
```
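As a sanity check on the Jacobian assembly above, here is a standalone numpy sketch with made-up sizes (a batch of 4 solutions, a 3-dimensional search space, and 2 measures are assumptions for illustration); the shapes mirror what `tell_dqd` receives:

```python
import numpy as np

# Hypothetical sizes: 4 solutions per batch, 3-dim search space, 2 measures.
batch, dim, n_measures = 4, 3, 2

# Stand-ins for the gradients computed via autograd in the snippet above.
objective_grad = np.random.rand(batch, dim).reshape(batch, 1, -1)  # (4, 1, 3)
measure_grad = np.random.rand(batch, n_measures, dim)              # (4, 2, 3)

# One (1 + n_measures) x dim Jacobian per solution.
jacobian = np.concatenate((objective_grad, measure_grad), axis=1)
print(jacobian.shape)  # (4, 3, 3)
```

If any of these three shapes disagree with each other, the `np.concatenate` call (or a later `tell_dqd`) is where it will surface.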
```python
solutions = self.scheduler.ask()

# Evaluate the models and record the objective and behavior.
# Note that objective is -cost.
cost = self._f(x)
bcs = self._measure(solutions)
self.scheduler.tell(-cost.cpu().numpy(), bcs)
```
On a side note: is there anything wrong with the way I'm using DQD? I'm getting better QD exploration with the `EvolutionStrategyEmitter` than with the `GradientArboresenceEmitter`.
Shouldn't this line `cost = self._f(x)` be written as `cost = self._f(solutions)` at the end of your code segment?
`x` is the tensor version of `solutions`, and `self._f` acts on tensors.
Yes, but `x` is the tensor version of `solutions` from `ask_dqd`, which you already used for `tell_dqd`. So you should redefine `x` to be the tensor of `solutions` from `ask`. Perhaps as follows:
```python
...
solutions = self.scheduler.ask()
x = ensure_tensor(self.device, self.dtype, solutions)
x.requires_grad = True

# Evaluate the models and record the objective and behavior.
# Note that objective is -cost.
cost = self._f(x)
bcs = self._measure(solutions)
self.scheduler.tell(-cost.cpu().numpy(), bcs)
```
Let me know if this solves your issue or not.
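To see why the stale `x` matters, here is a minimal numpy sketch (the toy `f` and the arrays are made up for illustration, not the actual model): evaluating the old tensor silently scores the previous batch instead of the new one.

```python
import numpy as np

def f(x):
    # Toy stand-in for self._f: a simple quadratic cost.
    return (x ** 2).sum(axis=1)

solutions_dqd = np.ones((2, 3))      # pretend batch from ask_dqd()
x = solutions_dqd.copy()             # "tensor" version of the DQD batch

solutions = np.full((2, 3), 5.0)     # later batch from ask()
stale_cost = f(x)                    # BUG: still evaluates the DQD batch
x = solutions.copy()                 # FIX: rebuild x from the new batch
fresh_cost = f(x)

print(stale_cost)  # [3. 3.]
print(fresh_cost)  # [75. 75.]
```

The scheduler then receives costs that were computed for solutions it never handed out, which is exactly the kind of mismatch `tell` cannot detect on its own.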
Ah yes, sorry, that was already the case; it was an error in my transcription/summarization. The actual later ask/tell is correct.
Thanks @itsdawei for helping resolve this. @LemonPi let us know if you have any more questions.
I meant that the bug still persists and that it was an error in my second post describing my code. In actuality it is the correct formulation, as you mentioned below, so I didn't make any changes.
I see; apologies for misunderstanding. I took a closer look; can you check that your code has all the correct shapes, particularly for `objective`, `bcs`, and `jacobian` in `tell_dqd`? Also check the shape of `x[i]`.
Also, what commit of pyribs are you using? I suggest pulling the latest commit on `master`, as we recently added a lot of shape checks to make it harder for these types of problems to occur.
Yeah, it seems like there is a shape mismatch between your objectives (which are used to generate the `ranking_indices`) and your solutions (which were generated by `ask`).
If you checked everything and it is all correct, could you provide me with the shapes of all your arrays (`solution`, `bcs`, and `jacobian`) for both `tell_dqd` and `tell`, so I can simulate your code on my side and step through the program?
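For gathering those shapes, here is a small hypothetical helper (not part of pyribs; the name, signature, and shape conventions are assumptions based on this thread) that you could call right before `tell`/`tell_dqd`:

```python
import numpy as np

def report_tell_shapes(objective, measures, jacobian=None):
    """Print and sanity-check array shapes before tell / tell_dqd.

    Assumed conventions: objective is (batch,), measures is
    (batch, n_measures), jacobian is (batch, 1 + n_measures, dim).
    """
    batch = objective.shape[0]
    assert objective.ndim == 1, "objective should be 1-D: one value per solution"
    assert measures.shape[0] == batch, "measures needs one row per solution"
    if jacobian is not None:
        assert jacobian.shape[0] == batch, "jacobian needs one entry per solution"
        assert jacobian.shape[1] == 1 + measures.shape[1], \
            "jacobian rows should be 1 objective gradient + one per measure"
    shapes = {"objective": objective.shape, "measures": measures.shape,
              "jacobian": None if jacobian is None else jacobian.shape}
    print(shapes)
    return shapes

# Example with a batch of 4 solutions, 2 measures, 9-dim search space.
report_tell_shapes(np.zeros(4), np.zeros((4, 2)), np.zeros((4, 3, 9)))
```

Pasting the printed dict for both the `tell_dqd` and `tell` calls would pin down which array has the wrong shape.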
Description

Explicitly setting `bounds` on a `GradientArboresenceEmitter` (a 9 x 2 np.array for a 9-dimensional solution space) leads to an indexing issue with stack trace:

The error occurs on the line:

Where

My emitters are created with (`self.num_emitters = 1`):