At the moment, the distribution with which valuations are sampled in randomized operations depends on the structure of the BDD. Instead, we should adjust the algorithm to ensure that the sampling is uniform across the valuations of the BDD.
Currently, the randomized operations use a "fair coin" to pick each variable value (i.e. both branches are equally likely). This over-represents results from branches with few valuations. To make the sampling truly uniform across all valuations, the coin should be biased by the number of valuations in each branch. This count is cheap to compute (it's the same recursion as for the total cardinality); we just need to bias the coin proportionally to it.
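A minimal sketch of the idea, assuming a toy `Node` representation (not the library's actual API): `count` is the usual cardinality recursion, where every skipped variable contributes a factor of two, and `sample` walks from the root, biasing the coin at each decision node by the counts of the two branches.

```python
import random
from dataclasses import dataclass

# Hypothetical minimal BDD node: tests variable `var`, with `low`/`high`
# children; terminal nodes are the plain booleans True/False.
@dataclass(frozen=True)
class Node:
    var: int
    low: "Node | bool"
    high: "Node | bool"

def count(node, var, n_vars):
    """Number of satisfying valuations of `node` over variables var..n_vars-1.
    Same recursion as the total-cardinality algorithm: each variable skipped
    between `var` and `node.var` doubles the count."""
    if node is True:
        return 2 ** (n_vars - var)
    if node is False:
        return 0
    skipped = node.var - var
    return (2 ** skipped) * (count(node.low, node.var + 1, n_vars)
                             + count(node.high, node.var + 1, n_vars))

def sample(node, n_vars, rng=random):
    """Sample one satisfying valuation uniformly (assumes the BDD is
    satisfiable). At each decision node, the coin is biased by the
    valuation counts of the two branches instead of being fair."""
    valuation = []
    var = 0
    while var < n_vars:
        if node is True or var < node.var:
            # Free or skipped variable: both values lead to the same number
            # of valuations, so a fair coin is correct here.
            valuation.append(rng.random() < 0.5)
            var += 1
            continue
        lo = count(node.low, var + 1, n_vars)
        hi = count(node.high, var + 1, n_vars)
        take_high = rng.random() * (lo + hi) < hi  # biased coin
        valuation.append(take_high)
        node = node.high if take_high else node.low
        var += 1
    return valuation
```

For example, for `x0 OR x1` the root's high branch (`x0 = True`) covers two valuations and the low branch only one, so the biased coin picks `high` with probability 2/3, which makes each of the three models equally likely. In practice the counts would be memoized per node rather than recomputed on every walk.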
Nevertheless... it could be interesting to also provide different sampling strategies, because sometimes we don't actually want to be uniform. For example, we might rather sample from the more "extreme" values...