artofscience / SAOR

Sequential Approximate Optimization Repository
GNU General Public License v3.0

Slightly change broadcast formulation #44

Closed MaxvdKolk closed 3 years ago

MaxvdKolk commented 3 years ago

As a follow-up to #43:

This still uses broadcasting to prevent recomputation, but with a slightly less verbose statement and the same outcome.

@Giannis1993 how about doing it like this?

MaxvdKolk commented 3 years ago

Compared to the loops, there is now no repeated evaluation of the self.lin.y functions, and the readability might be a bit improved compared to the following?

```python
y[self.positive] = self.lin.y(np.broadcast_to(x, self.positive.shape)[self.positive])
y[~self.positive] = self.rec.y(np.broadcast_to(x, self.positive.shape)[~self.positive])
```
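For reference, here is a self-contained sketch of that verbose form; lin_y, rec_y, and the example arrays are hypothetical stand-ins for self.lin.y, self.rec.y, and the solver state. It works for any mask shape, because x itself is broadcast to the full (m, n) grid before the intervening variables are evaluated:

```python
import numpy as np

lin_y = lambda x: 2.0 * x   # hypothetical stand-in for self.lin.y
rec_y = lambda x: 1.0 / x   # hypothetical stand-in for self.rec.y

x = np.array([1.0, 2.0])                  # design variables, shape (n,)
df = np.array([[0.5, -0.3], [1.2, 0.7]])  # sensitivities, shape (m, n)
positive = df > 0                         # boolean mask, shape (m, n)

y = np.empty_like(df)
# Broadcast x over the full (m, n) mask shape first, select the masked
# x-values, and only then evaluate the intervening variable on them.
y[positive] = lin_y(np.broadcast_to(x, positive.shape)[positive])
y[~positive] = rec_y(np.broadcast_to(x, positive.shape)[~positive])
```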

The current case evaluates self.lin.y(x) once and then broadcasts the result to the shape of y[self.positive]. Since masked indexing yields a flattened array with as many entries as there are True values in self.positive, we can broadcast to self.positive.sum(). For the negative entries, we can then broadcast to (~self.positive).sum().
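As I read it, the proposed form then looks roughly like the sketch below (same hypothetical stand-ins as above; see the follow-up comments on when this breaks):

```python
import numpy as np

lin_y = lambda x: 2.0 * x   # hypothetical stand-in for self.lin.y
rec_y = lambda x: 1.0 / x   # hypothetical stand-in for self.rec.y

x = np.array([1.0, 2.0])                   # design variables, shape (n,)
df = np.array([[0.5, 0.3], [-1.2, -0.7]])  # sensitivities, shape (m, n)
positive = df > 0                          # [[True, True], [False, False]]

y = np.empty_like(df)
# Evaluate each intervening variable once on x, then stretch the flat
# result over the masked entries: positive.sum() for the True branch,
# (~positive).sum() for the False branch. This only lines up when the
# broadcast shapes happen to be compatible (here positive.sum() == len(x)).
y[positive] = np.broadcast_to(lin_y(x), positive.sum())
y[~positive] = np.broadcast_to(rec_y(x), (~positive).sum())
```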

I feel that swapping the order, so putting the np.broadcast on the "outside", might be a bit more intuitive to read, but that's probably subjective. Just let me know what you think and I can make the other intervening variables consistent with this one (so maybe we hold off on merging this one until more commits are added, in case we decide to go with this 🙃)

MaxvdKolk commented 3 years ago

Now that I think about it, I am not sure this form actually works as intended for other cases 😅 I will have to look into it a bit more.

MaxvdKolk commented 3 years ago

Yeah, never mind this pull request. I think this only works when the mask self.positive = df > 0 happens to have exactly the right shape for the broadcast to go through. Once that no longer holds, it fails with a shape error.
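A minimal sketch of that failure mode, with the same hypothetical stand-ins: once positive.sum() differs from the length of self.lin.y(x) (and neither is 1), NumPy cannot stretch one onto the other.

```python
import numpy as np

lin_y = lambda x: 2.0 * x   # hypothetical stand-in for self.lin.y
x = np.array([1.0, 2.0])    # shape (2,)

df = np.array([[0.5, -0.3], [1.2, 0.7]])
positive = df > 0           # three True entries, so positive.sum() == 3

y = np.empty_like(df)
# lin_y(x) has length 2 but must cover 3 masked entries; broadcasting
# shape (2,) to (3,) is invalid, so this raises a ValueError.
y[positive] = np.broadcast_to(lin_y(x), positive.sum())
```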

Giannis1993 commented 3 years ago

Yeah, I was trying to understand how the broadcasting to self.positive.sum() worked, but it was giving me errors.