peteroupc opened this issue 4 years ago
I have made numerous updates to this document recently, with more algorithms and discussion.
Letting them know: @lmendo.
I have added the following open questions to the Bernoulli Factory Algorithms page. Any answers to these questions will greatly improve the article.
I request expressions of mathematical functions in any of the following ways:
These expressions should not rely on floating-point arithmetic or the direct use of irrational constants (such as π or sqrt(2)), but they may rely on rational arithmetic. For example, a series expansion that directly contains the constant π is not desired; however, a series expansion that converges to a fraction of π is.
In particular, I have found it surprisingly hard to find information and research on alternating power series and lower/upper bound approximations needed to apply certain Bernoulli factory algorithms, including Algorithm 3 of "Simulating Probabilities of Unknown Events via Reverse Time Martingales". Especially helpful would be information on how to transform an arbitrary function into an alternating series (preferably one that is easy to calculate) or another construct that satisfies these algorithms.
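To illustrate the kind of construction these algorithms need (this is my own sketch of the reverse-time-martingale idea, not Algorithm 3 itself): if the target probability can be written as an alternating series with decreasing terms, consecutive partial sums bracket it, and a lazily generated uniform variate can be compared against those brackets using only rational arithmetic. The sketch below does this for exp(-λ) = Σ (-1)^k λ^k/k! with rational λ in [0, 1]; note the simplifying assumption that λ is a known rational, so powers of λ are computed directly rather than estimated from coin flips.

```python
import random
from fractions import Fraction

def exp_neg(lam, rng=random):
    """Sample Bernoulli(exp(-lam)) for rational 0 <= lam <= 1, using the
    alternating series exp(-lam) = sum_k (-1)^k lam^k / k!.  Consecutive
    partial sums give rational lower/upper bounds on exp(-lam); a uniform
    variate U is generated lazily, one random bit at a time, until a
    comparison with those bounds decides the outcome."""
    lam = Fraction(lam)
    assert 0 <= lam <= 1
    u_lo, u_hi = Fraction(0), Fraction(1)   # interval known to contain U
    s, term, k = Fraction(1), Fraction(1), 0
    lo, hi = Fraction(0), Fraction(1)       # current bounds on exp(-lam)
    while True:
        if u_hi <= lo:
            return 1                        # U < exp(-lam) for certain
        if u_lo >= hi:
            return 0                        # U >= exp(-lam) for certain
        if u_hi - u_lo > hi - lo:
            mid = (u_lo + u_hi) / 2         # refine U with one random bit
            if rng.getrandbits(1):
                u_lo = mid
            else:
                u_hi = mid
        else:
            k += 1                          # tighten the series bounds
            term = term * lam / k           # now lam^k / k!
            s = s - term if k % 2 else s + term
            if k % 2:
                lo = s                      # odd partial sums: lower bounds
            else:
                hi = s                      # even partial sums: upper bounds
```

The loop terminates with probability 1 because the series bounds shrink to a point while the interval around U halves whenever it is the wider of the two.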
Letting these people know: @63EA13D5 .
Regarding the simulation of Euler's constant without floating-point arithmetic, I think I found a simple way to do that, as well as for other constants. Please see the attached paper (I just submitted it to arXiv, but it doesn't appear there yet).
Thank you for your response. In the meantime, I found a rational interval arithmetic described in Daumas, M., Lester, D., Muñoz, C., "Verified Real Number Calculations: A Library for Interval Arithmetic", arXiv:0708.3721 [cs.MS], 2007, which computes upper and lower bounds of common functions, and I implemented this arithmetic in Python. However, as I found, even this rational interval arithmetic doesn't always lend itself to simple Bernoulli factory algorithms (and because Python's Fraction class is very slow, I had to optimize some parts of the code). This is why I have avoided involving it, in the hope that an even simpler algorithm is possible without resorting to this kind of arithmetic.
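To give a flavor of this style of rational interval arithmetic (my own sketch of the idea, not the Daumas et al. library): common functions get rational lower and upper bounds from a truncated Taylor series plus an explicit tail bound. For exp on [0, 1], the omitted tail is geometric with term ratio at most 1/2 once n ≥ 1, so it is at most twice the first omitted term:

```python
from fractions import Fraction

def exp_bounds(x, n):
    """Rational bounds lo <= exp(x) <= hi for 0 <= x <= 1, from the
    first n Taylor terms.  The omitted tail sum(x^k/k!, k >= n) is
    geometric with ratio x/(k+1) <= 1/2, so it is bounded above by
    twice the first omitted term x^n/n!."""
    x = Fraction(x)
    assert 0 <= x <= 1 and n >= 1
    s = term = Fraction(1)
    for k in range(1, n):
        term = term * x / k       # x^k / k!
        s += term
    first_omitted = term * x / n  # x^n / n!
    return s, s + 2 * first_omitted
```

Increasing n tightens the interval; all quantities stay exact rationals throughout.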
Also, on your problem of "characteriz[ing] the minimum number of inputs that an arbitrary algorithm must use to transform 1/2-coins into a τ-coin", I am aware of the following paper by D. Kozen that relates to this: "Optimal Coin Flipping". By the results in that paper, for example, any algorithm of the kind relevant here will require at least 2 unbiased coin flips on average to simulate a coin of known bias τ, except when τ is a dyadic rational.
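A concrete version of the binary-expansion method that meets this bound for rational τ (my own sketch, not taken from Kozen's paper): compare a lazily generated uniform variate with τ one binary digit at a time. Each fair flip resolves the comparison with probability 1/2, so the expected number of flips is 2.

```python
import random
from fractions import Fraction

def tau_coin(tau, rng=random):
    """Sample Bernoulli(tau), 0 <= tau <= 1 rational, from fair coin
    flips by comparing a lazily generated uniform U with tau, digit by
    digit.  Each iteration consumes one flip and halts with probability
    1/2, so the expected number of flips is 2."""
    t = Fraction(tau)
    assert 0 <= t <= 1
    while True:
        t *= 2
        d = 1 if t >= 1 else 0     # next binary digit of tau
        bit = rng.getrandbits(1)   # next binary digit of U
        if bit != d:
            return 1 if bit < d else 0  # U < tau iff U's digit is smaller
        if d:
            t -= 1
        if t == 0:
            return 0  # remaining digits of tau are all 0, so U > tau a.s.
```

For dyadic τ the loop ends deterministically once τ's digits are exhausted, which is why only dyadic rationals escape the 2-flip lower bound.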
Thanks. The paper you link seems very interesting; I'll take a look. If the lower bound is 2, then my algorithm is close to optimum.
In my algorithm, and probably in others, computations can be reused if you are going to run the simulation several times (page 11 of my paper). This can increase speed.
I have noticed one error in your paper: In (16), "(2j-1)2j(2j+1)" should read "2(j-1)(2(j-1)+1)(2(j-1)+2)". Otherwise, the algorithm would simulate the wrong probability.
Also, for clarity, the "1/2" and "3/2" in the algorithm's pseudocode should be in parentheses.
Also, in the meantime, I have another open question:
This leads to another question, which I don't yet classify as an open question, since I might first check whether an algorithm I learned of today would help: It would be nice to have a simple algorithm that simulates a probability of the form exp(-y), where y is a logarithm of a very large number (or a sum of such logarithms) whose integer part is not known in advance; one such case was just given above. (Note that my page currently includes an algorithm for simulating exp(-z), but it assumes z's integer part is known. I think I have since developed a suitable algorithm.)
REFERENCE:
K. Bringmann, F. Kuhn, et al., “Internal DLA: Efficient Simulation of a Physical Growth Model.” In: Proc. 41st International Colloquium on Automata, Languages, and Programming (ICALP'14), 2014.
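One easy special case worth noting (my own observation, under the stated assumption): if y is a sum of logarithms of known positive integers a_1, …, a_k, then exp(-y) = 1/(a_1⋯a_k) is rational, and it can be simulated without ever forming the huge product or knowing y's integer part, by AND-ing together independent Bernoulli(1/a_i) draws:

```python
import random

def exp_neg_sum_of_logs(factors, rng=random):
    """Sample Bernoulli(exp(-y)) where y = sum(log(a) for a in factors)
    and each a is a known positive integer.  Since exp(-y) equals the
    product of the probabilities 1/a, AND together one Bernoulli(1/a)
    draw per factor; the huge product of the factors is never computed."""
    for a in factors:
        if rng.randrange(a) != 0:  # Bernoulli(1/a): zero with prob. 1/a
            return 0
    return 1
```

This sketch uses no floating-point arithmetic at all; the harder case in the question above is when y is not a sum of logs of known integers.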
Thanks for the correction. In my code I am using equation (15), so I hadn't noticed the mistake.
As for the other problems, I don't really know, sorry.
The requests and open questions for all my articles are now in a dedicated page: Requests and Open Questions.
However, since this GitHub issue is about all aspects of Bernoulli factories, not just the ones mentioned in Requests and Open Questions, I will leave this issue open for a while.
I want to draw attention to my Supplemental Notes for Bernoulli Factory algorithms:
Especially the section "Approximation Schemes". It covers ways to build polynomial approximations** to the vast majority of functions for which the Bernoulli factory problem can be solved (also called factory functions). They include concave, convex, and twice differentiable functions. My goal now is to find faster and more practical polynomial approximation schemes for these functions, which is why I have the following open questions:
** See my Math Stack Exchange question for a formal statement of the kind of approximations that I mean.
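For intuition about why concavity and convexity are helpful cases (a standard fact about Bernstein polynomials, sketched with my own hypothetical helper function): the degree-n Bernstein polynomial of a concave function lies below the function (and above it for a convex function), so one side of an approximation scheme comes for free.

```python
from fractions import Fraction
from math import comb

def bernstein(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at rational x,
    using exact rational arithmetic throughout."""
    x = Fraction(x)
    return sum(f(Fraction(k, n)) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

# For the concave f(x) = 2x(1-x), the Bernstein polynomial lies below f:
f = lambda t: 2 * t * (1 - t)
```

For example, bernstein(f, 4, Fraction(1, 2)) evaluates to 3/8, which is below f(1/2) = 1/2; the hard part, and the subject of the open questions, is certifying the other side with good convergence rates.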
> I want to draw attention to my Supplemental Notes for Bernoulli Factory algorithms:
>
> * https://peteroupc.github.io/bernsupp.html
>
> Especially the section "Approximation Schemes". It covers ways to build polynomial approximations** to the vast majority of functions for which the Bernoulli factory problem can be solved (also called factory functions).
This was part of my previous comment.
However, these polynomial approximation methods don't ensure a finite expected running time in general.
I suspect that a running time with finite expectation isn't possible in general unless the residual probabilities formed by the polynomials are of order O(1/n^(2+ε)) for some positive ε, which Holtz et al. ("New coins from old, smoothly", Constructive Approximation, 2011) proved is possible only if the function to be simulated is C² continuous. I suspect this is so because sums of residual probabilities of the form O(1/n^2) don't converge, but such sums do converge when the order is O(1/n^(2+ε)). (By residual probabilities, I mean the probability P(N > n) given in Result 3, condition (v), of that paper.)
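For reference, the link between the residual probabilities and the expected running time is the standard identity for a nonnegative integer-valued random variable N:

```latex
\mathbb{E}[N] \;=\; \sum_{n=0}^{\infty} \mathbb{P}(N > n)
```

so the expected number of inputs is finite exactly when the residual probabilities have a convergent sum.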
Thus I have several questions:
Other interesting questions:
I have collected all my questions on the Bernoulli factory problem in one place:
Issue opened to seek comments or suggestions related to my page on Bernoulli Factory algorithms, which are algorithms to turn coins biased one way into coins biased another way.
https://peteroupc.github.io/bernoulli.html
You can send comments on this document on this issues page. You are welcome to suggest additional Bernoulli factory algorithms, especially specific continued fraction expansions and series expansions for the general martingale and Mendo algorithms.