Open AllardJM opened 1 month ago
Looking into the source code, it seems that part of this issue is related to the following:
From the objective function:
```python
spend = np.full(self.num_periods, budget)
spend_extended = np.concatenate([spend, np.zeros(self.adstock.l_max)])
```
The full channel budget is copied into every period. It seems that if the amount were instead divided by `num_periods` first, a better solution could be found. As currently written, the optimizer is far more likely to erroneously land on the flat part of the saturation curve. In my case, where the period of consideration is 8 weeks long, placing the sum of a channel's spend or impressions (the `budget_bounds`) on the curve in every period is a significant extrapolation from the per-period amounts the model was built on.
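To make the extrapolation concrete, here is a minimal sketch (not the package's actual code; the `logistic_saturation` helper, the `lam` value, and the budget figure are all hypothetical) comparing where the saturation curve is evaluated under the current behaviour versus dividing by the number of periods first:

```python
import numpy as np

def logistic_saturation(x, lam=0.01):
    # a common MMM-style saturation transform (illustrative parameters)
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

num_periods = 8
budget = 800.0  # hypothetical total channel budget over the 8-week horizon

# current behaviour: the full budget is placed in every period
spend_current = np.full(num_periods, budget)
# proposed: spread the budget across the periods first
spend_divided = np.full(num_periods, budget / num_periods)

sat_current = logistic_saturation(spend_current)
sat_divided = logistic_saturation(spend_divided)

print(sat_current[0])  # deep in the saturated, flat part of the curve
print(sat_divided[0])  # still on the informative part of the curve
```

With the total budget in every period, every candidate allocation sits on the flat tail of the curve, so the objective barely distinguishes between channels.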
Version: Dev (9/13/24).
I'm looking for some advice on what seems odd to me in the "optimal" channel budget being returned. My data has only 4 channels. The fitted model gives the following median values for the beta coefficients:
It appears that the "csi" channel, which has a substantially larger effect than the others, is far from completely saturated:
However, if a scenario is run against the actual spends from the last 8 weeks of the training data (I'm using the optimizer with fixed budget constraints so that outputs are comparable), we get the following estimated outcome when requiring exact values matching those 8 weeks:
```python
find_optimal_channel_spend(mmm, min_prop=1, max_prop=1)
```
Now suppose we allow wider ranges, where the expectation would be that more "csi" spend is included and revenue increases. This is not the case, though:
```python
find_optimal_channel_spend(mmm, min_prop=0.5, max_prop=1.5)
```
Instead we get a suboptimal outcome: the optimizer decreased spend on the top-performing channel and decreased expected revenue.
I've played around with various ranges of `min_prop` and `max_prop` and see the same inability to select better allocations of media spend.
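This failure mode is consistent with the objective evaluating the saturation curve at the total budget. A minimal sketch (all betas, budgets, and the saturation transform here are hypothetical, not taken from the fitted model) of the marginal return each channel presents to the optimizer under the two conventions:

```python
import numpy as np

def logistic_saturation(x, lam=0.01):
    # illustrative saturation transform
    return (1 - np.exp(-lam * x)) / (1 + np.exp(-lam * x))

# hypothetical setup: one dominant channel (index 0, like "csi")
betas = np.array([4.0, 1.0, 0.8, 0.5])
budgets = np.array([800.0, 300.0, 250.0, 200.0])
num_periods = 8

def marginal(b, eps=1e-4):
    # finite-difference marginal response per unit of extra spend
    return (logistic_saturation(b + eps) - logistic_saturation(b)) / eps

# current objective: curve evaluated at the total budget per period
grad_total = betas * marginal(budgets)
# divided-by-periods alternative
grad_per_period = betas * marginal(budgets / num_periods)

print(grad_total)       # dominant channel no longer has the largest gradient
print(grad_per_period)  # dominant channel correctly has the largest gradient
```

At the total-budget scale the dominant channel looks fully saturated, so its marginal return falls below that of the weaker channels and the optimizer rationally shifts spend away from it, exactly the behaviour observed above.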