# Supplemental Notes for Bernoulli Factory Algorithms

Peter Occil

## General Factory Functions

As a reminder, the Bernoulli factory problem is: We're given a coin that shows heads with an unknown probability, λ, and the goal is to use that coin (and possibly also a fair coin) to build a "new" coin that shows heads with a probability that depends on λ, call it f(λ).

The algorithms for general factory functions, described in my main article on Bernoulli factory algorithms, work by building randomized upper and lower bounds for a function f(λ), based on flips of the input coin. Roughly speaking, the algorithms work as follows:

1. Generate a uniform(0, 1) random variate, U.
2. Flip the input coin, then build an upper and lower bound for f(λ), based on the outcomes of the flips so far.
3. If U is less than or equal to the lower bound, return 1. If U is greater than the upper bound, return 0. Otherwise, go to step 2.
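As a rough sketch (not the exact method of the main article), these steps can be rendered in Python for a concave, twice differentiable f, using the approximation scheme described later in this section. The function f(λ) = sin(π*λ)/4 + 1/2, the bound m on the second derivative, and the starting degree of 4 are assumptions chosen for this illustration:

```python
import math
import random

def simulate(flip, f, m):
    """Rough sketch of the three steps above for a concave, twice
    differentiable f with abs(f'') <= m on [0, 1]; the degree n
    doubles each round, reusing earlier flips."""
    u = random.random()            # step 1: uniform(0, 1) random variate U
    flips = []
    n = 4                          # start at degree 4 for simplicity
    while True:
        while len(flips) < n:      # step 2: flip the input coin
            flips.append(flip())
        k = sum(flips)
        lower = f(k / n)                          # concave: f(k/n) is a lower bound
        upper = min(1.0, f(k / n) + m / (7 * n))  # upper bound with an offset
        if u <= lower:             # step 3: compare U with the bounds
            return 1
        if u > upper:
            return 0
        n *= 2

# Demo coin and target function (assumptions chosen for this sketch).
f = lambda lam: math.sin(math.pi * lam) / 4 + 1 / 2
m = math.pi ** 2 / 4               # bound on abs(f'') for this f
coin = lambda: 1 if random.random() < 0.5 else 0
```

With an input coin of probability 1/2, repeated runs should return 1 roughly f(1/2) = 3/4 of the time.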

These randomized upper and lower bounds come from two sequences of polynomials: one approaches the function f(λ) from above, the other from below, where f is a function for which the Bernoulli factory problem can be solved. (These two sequences form a so-called approximation scheme for f.) One requirement for these algorithms to work correctly is called the consistency requirement:

The difference—

• between the degree-(n−1) upper polynomial and the degree-n upper polynomial, and
• between the degree-n lower polynomial and the degree-(n−1) lower polynomial,

must have non-negative coefficients, once the polynomials are elevated to degree n and rewritten in Bernstein form.

The consistency requirement ensures that the upper polynomials "decrease" and the lower polynomials "increase". Unfortunately, the reverse is not true in general; even if the upper polynomials "decrease" and the lower polynomials "increase" to f, this does not mean that the scheme will ensure consistency. Examples of this fact are shown in the section "Schemes That Don't Work" later in this document.
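The consistency requirement can be checked mechanically. The following Python sketch elevates a polynomial in Bernstein form and compares coefficients; `elevate` and `consistent` are illustrative helpers, not part of any library:

```python
from fractions import Fraction
from math import comb

def elevate(coeffs, r):
    """Elevate a polynomial in Bernstein form by r degrees, using the
    standard degree-elevation formula."""
    n = len(coeffs) - 1
    return [
        sum(coeffs[j] * Fraction(comb(n, j) * comb(r, k - j), comb(n + r, k))
            for j in range(max(0, k - r), min(n, k) + 1))
        for k in range(n + r + 1)
    ]

def consistent(lower_old, lower_new, upper_old, upper_new):
    """Check the consistency requirement: once the lower-degree polynomials
    are elevated to the new degree, the upper coefficients must not rise
    and the lower coefficients must not fall."""
    r = len(upper_new) - len(upper_old)
    lo = elevate(lower_old, r)
    up = elevate(upper_old, r)
    return all(a >= b for a, b in zip(up, upper_new)) and \
           all(a <= b for a, b in zip(lo, lower_new))
```

For example, elevating the degree-1 polynomial with coefficients [0, 1] (that is, λ) by one degree gives coefficients [0, 1/2, 1].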

In this document, fbelow(n, k) and fabove(n, k) mean the kth coefficient for the lower or upper degree-n polynomial in Bernstein form, respectively, where k is an integer in the interval [0, n].

### Approximation Schemes

A factory function f(λ) is a function for which the Bernoulli factory problem can be solved (see "About Bernoulli Factories"). The following are approximation schemes for f if it belongs to one of certain classes of factory functions. It would be helpful to plot the desired function f using a computer algebra system to see if it belongs to any of the classes of functions described below.

Concave functions. If f is known to be concave in the interval [0, 1] (which roughly means that its rate of growth there never goes up), then fbelow(n, k) can equal f(k/n), thanks to Jensen's inequality.

Convex functions. If f is known to be convex in the interval [0, 1] (which roughly means that its rate of growth there never goes down), then fabove(n, k) can equal f(k/n), thanks to Jensen's inequality. One example is f(λ) = exp(−λ/4).

Twice differentiable functions. The following method, proved in the appendix, implements fabove and fbelow if f(λ)—

• has a defined "slope-of-slope" everywhere in [0, 1] (that is, the function is twice differentiable there), and
• in the interval [0, 1]—
• has a minimum of greater than 0 and a maximum of less than 1, or
• is convex and has a minimum of greater than 0, or
• is concave and has a maximum of less than 1.

Let m be an upper bound of the highest value of abs(f′′(x)) for any x in [0, 1], where f′′ is the "slope-of-slope" function of f. Then for every integer n that's a power of 2:

• fbelow(n, k) = f(k/n) if f is concave; otherwise, min(fbelow(4,0), fbelow(4,1), ..., fbelow(4,4)) if n < 4; otherwise, f(k/n) − m/(7*n).
• fabove(n, k) = f(k/n) if f is convex; otherwise, max(fabove(4,0), fabove(4,1), ..., fabove(4,4)) if n < 4; otherwise, f(k/n) + m/(7*n).
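As a numerical illustration of this scheme (assumptions: f(λ) = exp(−λ/4), which is convex with minimum greater than 0, and m = 1/16, the maximum of abs(f′′) on [0, 1]), the following sketch builds the degree-n coefficients and checks that the resulting polynomials bound f on a grid:

```python
from math import comb, exp

f = lambda x: exp(-x / 4)  # convex on [0, 1], with minimum exp(-1/4) > 0
m = 1 / 16                 # max of abs(f''(x)) = exp(-x/4)/16, attained at x = 0

def fabove(n, k):
    return f(k / n)        # convex case: f(k/n) works by Jensen's inequality

def fbelow(n, k):
    if n < 4:
        return min(fbelow(4, j) for j in range(5))
    return f(k / n) - m / (7 * n)

def bernstein(coeffs, x):
    """Evaluate the polynomial in Bernstein form with the given coefficients."""
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(coeffs))

n = 16  # a power of 2
lower = [fbelow(n, k) for k in range(n + 1)]
upper = [fabove(n, k) for k in range(n + 1)]
```

On a grid of points, the lower polynomial should stay at or below f, and the upper polynomial at or above it.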

My GitHub repository includes SymPy code for a method, `c2params`, to calculate the necessary values for m and the bounds of these polynomials, given f.

Note: For this method, the "slope-of-slope" function need not be continuous (Y. Peres, pers. comm., 2021).

Hölder and Lipschitz continuous functions. I have found a way to extend the results of Nacu and Peres (2005)(1) to certain functions with a slope that tends to a vertical slope. The following scheme, proved in the appendix, implements fabove and fbelow if f(λ)—

• is α-Hölder continuous in [0, 1], meaning its vertical slopes there, if any, are no "steeper" than that of m*λ^α, for some number m greater than 0 (the Hölder constant) and for some α in the interval (0, 1], and
• in the interval [0, 1]—
• has a minimum of greater than 0 and a maximum of less than 1, or
• is convex and has a minimum of greater than 0, or
• is concave and has a maximum of less than 1.

If f in [0, 1] has a defined slope at all points or "almost everywhere"(2), and does not tend to a vertical slope anywhere, then f is Lipschitz continuous, α is 1, and m is the highest absolute value of the function's "slope". Otherwise, finding m for a given α is non-trivial and it requires knowing where f's vertical slopes are, among other things.(3) But assuming m and α are known, then for every integer n that's a power of 2:

• D(n) = m*(2/7)^(α/2)/((2^(α/2)−1)*n^(α/2)).
• fbelow(n, k) = f(k/n) if f is concave; otherwise, min(fbelow(4,0), fbelow(4,1), ..., fbelow(4,4)) if n < 4; otherwise, f(k/n) − D(n).
• fabove(n, k) = f(k/n) if f is convex; otherwise, max(fabove(4,0), fabove(4,1), ..., fabove(4,4)) if n < 4; otherwise, f(k/n) + D(n).

Note:

1. Some functions f are not α-Hölder continuous for any α greater than 0. These functions have a slope that's steeper than every "nth" root, and can't be handled by this method. One example is f(λ) = 1/10 if λ is 0 and −1/(2*ln(λ/2)) + 1/10 otherwise, which has a slope near 0 that's steeper than every "nth" root.
2. In the Lipschitz case (α = 1), D(n) can be m*322613/(250000*sqrt(n)), which is an upper bound.
3. In the case α = 1/2, D(n) can be m*154563/(40000*n^(1/4)), which is an upper bound.
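The Hölder scheme can be checked the same way. The sketch below assumes f(λ) = 3/4 − sqrt(λ*(1−λ)), which is convex with minimum 1/4, taking α = 1/2 and m = 1 (an assumption argued from abs(sqrt(a)−sqrt(b)) ≤ sqrt(abs(a−b)) and abs(x*(1−x) − y*(1−y)) ≤ abs(x−y) on [0, 1]):

```python
from math import comb, sqrt

f = lambda x: 3 / 4 - sqrt(x * (1 - x))  # convex, minimum 1/4 at x = 1/2
alpha, m = 1 / 2, 1                      # (1/2)-Hölder with Hölder constant 1

def D(n):
    """Per-coefficient offset from the scheme above."""
    return m * (2 / 7) ** (alpha / 2) / ((2 ** (alpha / 2) - 1) * n ** (alpha / 2))

def fabove(n, k):
    return f(k / n)  # convex case

def fbelow(n, k):
    if n < 4:
        return min(f(j / 4) - D(4) for j in range(5))
    return f(k / n) - D(n)

def bernstein(coeffs, x):
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(coeffs))
```

A grid check should again confirm that the degree-n polynomials bracket f.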

Certain functions that equal 0 at 0. This approach involves transforming the function f so that it no longer equals 0 at the point 0. This can be done by dividing f by a function (h(λ)) that "dominates" f at every point in the interval [0, 1]. Unlike for the original function, there might be an approximation scheme described earlier in this section for the transformed function.

More specifically, h(λ) must meet the following requirements:

• h(λ) is continuous on the closed interval [0, 1].
• h(0) = 0. (This is required to ensure correctness in case λ is 0.)
• 1 ≥ h(1) ≥ f(1) ≥ 0.
• 1 > h(λ) > f(λ) > 0 for every λ in the open interval (0, 1).
• If f(1) = 0, then h(1) = 0. (This is required to ensure correctness in case λ is 1.)

Also, h should be a function with a simple Bernoulli factory algorithm. For example, h can be a polynomial in Bernstein form of degree n whose n plus one coefficients are [0, 1, 1, ..., 1]. This polynomial is easy to simulate using the algorithms from the section "Certain Polynomials".
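For instance, with coefficients [0, 1, 1, ..., 1], the degree-n polynomial in Bernstein form equals 1 − (1−λ)^n (it returns 0 only if all n flips return 0), which can be confirmed with SymPy:

```python
from sympy import symbols, binomial, expand

lam = symbols('lambda')
n = 5
coeffs = [0] + [1] * n  # Bernstein coefficients [0, 1, 1, ..., 1]
poly = sum(c * binomial(n, k) * lam**k * (1 - lam)**(n - k)
           for k, c in enumerate(coeffs))
# The polynomial collapses to 1 - (1 - lambda)**n.
assert expand(poly - (1 - (1 - lam)**n)) == 0
```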

The algorithm is now described.

Let g(λ) = lim_(ν→λ) f(ν)/h(ν) (in other words, the value that f(ν)/h(ν) approaches as ν approaches λ). If—

• f(0) = 0 and f(1) < 1, and
• g(λ) is continuous on [0, 1] and belongs in one of the classes of functions given earlier,

then f can be simulated using the following algorithm:

1. Run a Bernoulli factory algorithm for h. If the call returns 0, return 0. (For example, if h(λ) = λ, then this step amounts to the following: "Flip the input coin. If it returns 0, return 0.")
2. Run one of the general factory function algorithms for g(.), and return the result of that algorithm. This involves building polynomials that converge to g(.), as described earlier in this section. (Alternatively, if g is easy to simulate, instead run another Bernoulli factory algorithm for g and return the result of that algorithm.)

Notes:

1. It may happen that g(0) = 0. In this case, step 2 of this algorithm can involve running this algorithm again, but with new g and h functions that are found based on the current g function. See the second example below.
2. If—

• f is monotonically increasing,
• h(λ) = λ, and
• f′(λ), the "slope" function of f, is continuous on [0, 1], maps (0, 1) to (0, 1), and belongs in one of the classes of functions given earlier,

then step 2 can be implemented by taking g as f′, except: (A) a uniform(0, 1) random variate u is generated at the start of the step; (B) instead of flipping the input coin as normal during that step, a different coin is flipped that does the following: "Flip the input coin, then sample from the number u. Return 1 if both the call and the flip return 1, and return 0 otherwise."
This is the "integral method" of Flajolet et al. (2010)(4) (the modified step 2 simulates 1/λ times the integral of f′).

Examples:

1. If f(λ) = (sinh(λ)+cosh(λ)−1)/4, then f is bounded from above by h(λ) = λ, so g(λ) is 1/4 if λ = 0, and (exp(λ) − 1)/(4*λ) otherwise. The following SymPy code computes this example: `fx = (sinh(x)+cosh(x)-1)/4; h = x; pprint(Piecewise((limit(fx/h,x,0), Eq(x,0)), ((fx/h).simplify(), True)))`.
2. If f(λ) = cosh(λ) − 1, then f is bounded from above by h(λ) = λ, so g(λ) is 0 if λ = 0, and (cosh(λ)−1)/λ otherwise. Since g(0) = 0, we find new functions g and h based on the current g. The current g is bounded from above by H(λ) = λ*3*(2−λ)/5 (a degree-2 polynomial that in Bernstein form has coefficients [0, 6/10, 6/10]), so G(λ) = 5/12 if λ = 0, and −(5*cosh(λ) − 5)/(3*λ^2*(λ−2)) otherwise. G is bounded away from 0 and 1, so we have the following algorithm:

1. (Simulate h.) Flip the input coin. If it returns 0, return 0.
2. (Simulate H.) Flip the input coin twice. If both flips return 0, return 0. Otherwise, with probability 4/10 (that is, 1 minus 6/10), return 0.
3. Run a Bernoulli factory algorithm for G (which involves building polynomials that converge to G, noticing that G is twice differentiable) and return the result of that algorithm.
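The factorization behind this example can be verified with SymPy: the three steps return 1 with probability h(λ)*H(λ)*G(λ), which should equal f(λ) = cosh(λ) − 1, and G should approach 5/12 at 0:

```python
from sympy import symbols, cosh, simplify, limit, Rational

lam = symbols('lambda', positive=True)
f = cosh(lam) - 1
h = lam                          # step 1 simulates h
H = lam * 3 * (2 - lam) / 5      # step 2 simulates H
G = (f / h) / H                  # step 3 simulates G
assert simplify(h * H * G - f) == 0
assert limit(G, lam, 0) == Rational(5, 12)  # G(0), as stated above
```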

Certain functions that equal 0 at 0 and 1 at 1. Let f, g, and h be functions as defined earlier, except that f(0) = 0 and f(1) = 1. Define the following additional functions:

• ω(λ) is a function that meets the following requirements:
• ω(λ) is continuous on the closed interval [0, 1].
• ω(0) = 0 and ω(1) = 1.
• 1 > f(λ) > ω(λ) > 0 for every λ in the open interval (0, 1).
• q(λ) = lim_(ν→λ) ω(ν)/h(ν).
• r(λ) = lim_(ν→λ) (1−g(ν))/(1−q(ν)).

Roughly speaking, ω is a function that bounds f from below, just as h bounds f from above. ω should be a function with a simple Bernoulli factory algorithm, such as a polynomial in Bernstein form. If both ω and h are polynomials of the same degree, q will be a rational function with a relatively simple Bernoulli factory algorithm (see "Certain Rational Functions").

Now, if r(λ) is continuous on [0, 1] and belongs in one of the classes of functions given earlier, then f can be simulated using the following algorithm:

1. Run a Bernoulli factory algorithm for h. If the call returns 0, return 0. (For example, if h(λ) = λ, then this step amounts to the following: "Flip the input coin. If it returns 0, return 0.")
2. Run a Bernoulli factory algorithm for q(.). If the call returns 1, return 1.
3. Run one of the general factory function algorithms for r(.). If the call returns 0, return 1. Otherwise, return 0. This step involves building polynomials that converge to r(.), as described earlier in this section.

Example: If f(λ) = (1−exp(λ))/(1−exp(1)), then f is bounded from above by h(λ) = λ, and from below by ω(λ) = λ^2. As a result, q(λ) = λ, and r(λ) = (2 − exp(1))/(1 − exp(1)) if λ = 0; 1/(exp(1)−1) if λ = 1; and (−λ*(1 − exp(1)) − exp(λ) + 1)/(λ*(1 − exp(1))*(λ − 1)) otherwise. This can be computed using the following SymPy code: `fx=(1-exp(x))/(1-exp(1)); h=x; phi=x**2; q=(phi/h); r=(1-fx/h)/(1-q); r=Piecewise((limit(r, x, 0), Eq(x,0)), (limit(r,x,1),Eq(x,1)), (r,True)).simplify(); pprint(r)`.
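The identity behind the three-step algorithm can be checked for this example: the probability of returning 1 is h*q + h*(1−q)*(1−r), which should equal f. A SymPy sketch:

```python
from sympy import symbols, exp, simplify

lam = symbols('lambda', positive=True)
f = (1 - exp(lam)) / (1 - exp(1))
h = lam            # bounds f from above
omega = lam**2     # bounds f from below
q = omega / h
r = (1 - f / h) / (1 - q)
# Step 1 fails w.p. 1-h; step 2 returns 1 w.p. h*q;
# step 3 returns 1 w.p. h*(1-q)*(1-r).  The total must be f.
assert simplify(h * q + h * (1 - q) * (1 - r) - f) == 0
```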

Other functions that equal 0 or 1 at the endpoints 0 and/or 1. If f does not fully admit an approximation scheme under the convex, concave, twice differentiable, and Hölder classes:

| If f(0) = | And f(1) = | Method |
| --- | --- | --- |
| > 0 and < 1 | 1 | Use the algorithm for certain functions that equal 0 at 0, but with f(λ) = 1 − f(1−λ).<br>Inverted coin: Instead of the usual input coin, use a coin that does the following: "Flip the input coin and return 1 minus the result."<br>Inverted result: If the overall algorithm would return 0, it returns 1 instead, and vice versa. |
| > 0 and < 1 | 0 | Algorithm for certain functions that equal 0 at 0, but with f(λ) = f(1−λ). (For example, cosh(λ)−1 becomes cosh(1−λ)−1.)<br>Inverted coin. |
| 1 | 0 | Algorithm for certain functions that equal 0 at 0 and 1 at 1, but with f(λ) = 1−f(λ).<br>Inverted result. |
| 1 | > 0 and ≤ 1 | Algorithm for certain functions that equal 0 at 0, but with f(λ) = 1−f(λ).<br>Inverted result. |

Specific functions. My GitHub repository includes SymPy code for a method, `approxscheme2`, to build a polynomial approximation scheme for certain factory functions.

### Schemes That Don't Work

In the academic literature (papers and books), there are many approximation schemes that involve polynomials that converge from above and below to a function. Unfortunately, most of them cannot be used as is to simulate a function f in the Bernoulli Factory setting, because they don't ensure the consistency requirement described earlier.

The following are approximation schemes with counterexamples to consistency.

First scheme. In this scheme (Powell 1981)(5), let f be a C^2 continuous function (a function with continuous "slope" and "slope-of-slope" functions) in [0, 1]. Then for every n≥1:

• fabove(n, k) = f(k/n) + M / (8*n).
• fbelow(n, k) = f(k/n) − M / (8*n).

Where M is not less than the maximum absolute value of f's slope-of-slope function (second derivative), and where k is an integer in the interval [0, n].

The counterexample involves the C^2 continuous function g(λ) = sin(π*λ)/4 + 1/2.

For g, the coefficients for—

• the degree-2 upper polynomial in Bernstein form (fabove(2, k)) are [0.6542..., 0.9042..., 0.6542...], and
• the degree-4 upper polynomial in Bernstein form (fabove(4, k)) are [0.5771..., 0.7538..., 0.8271..., 0.7538..., 0.5771...].

The degree-2 polynomial lies above the degree-4 polynomial everywhere in [0, 1]. However, to ensure consistency, the degree-2 polynomial, once elevated to degree 4 and rewritten in Bernstein form, must have coefficients that are greater than or equal to those of the degree-4 polynomial.

• Once elevated to degree 4, the degree-2 polynomial's coefficients are [0.6542..., 0.7792..., 0.8208..., 0.7792..., 0.6542...].

As we can see, the elevated polynomial's coefficient 0.8208... is less than the corresponding coefficient 0.8271... for the degree-4 polynomial.
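This counterexample can be reproduced numerically. The sketch below rebuilds both upper polynomials from the scheme (with M = π^2/4 for g, the maximum of abs(g′′)) and elevates the degree-2 polynomial:

```python
from math import comb, pi, sin

g = lambda x: sin(pi * x) / 4 + 1 / 2
M = pi * pi / 4   # maximum of abs(g'') on [0, 1]

def fabove(n, k):
    return g(k / n) + M / (8 * n)

def elevate(coeffs, r):
    """Standard Bernstein degree elevation by r degrees."""
    n = len(coeffs) - 1
    return [sum(coeffs[j] * comb(n, j) * comb(r, k - j) / comb(n + r, k)
                for j in range(max(0, k - r), min(n, k) + 1))
            for k in range(n + r + 1)]

def bernstein(coeffs, x):
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, c in enumerate(coeffs))

deg2 = [fabove(2, k) for k in range(3)]
deg4 = [fabove(4, k) for k in range(5)]
lifted = elevate(deg2, 2)
```

The degree-2 polynomial lies above the degree-4 one pointwise, yet after elevation its middle coefficient is smaller than the corresponding degree-4 coefficient, so consistency fails.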

The rest of this section will note counterexamples involving other functions and schemes, without demonstrating them in detail.

Second scheme. In this scheme, let f be a Lipschitz continuous function in [0, 1] (that is, a continuous function in [0, 1] that has a defined slope at all points or "almost everywhere", and does not tend to a vertical slope anywhere). Then for every n≥2:

• fabove(n, k) = f(k/n) + L*(5/4) / sqrt(n).
• fbelow(n, k) = f(k/n) − L*(5/4) / sqrt(n).

Where L is the maximum absolute "slope", also known as the Lipschitz constant, and (5/4) is the so-called Popoviciu constant, and where k is an integer in the interval [0, n] (Lorentz 1986)(6), (Popoviciu 1935)(7).

There are two counterexamples here; together they show that this scheme can fail to ensure consistency, even if the set of functions is restricted to "smooth" functions (not just Lipschitz continuous functions):

1. The function f(λ) = min(λ, 1−λ)/2 is Lipschitz continuous with Lipschitz constant 1/2. (In addition, f has a kink at 1/2, so that it's not differentiable, but this is not essential for the counterexample.) The counterexample involves the degree-5 and degree-6 upper polynomials (fabove(5, k) and fabove(6, k)).
2. The function f(λ) = sin(4*π*λ)/4 + 1/2, a "smooth" function with Lipschitz constant π. The counterexample involves the degree-3 and degree-4 lower polynomials (fbelow(3, k) and fbelow(4, k)).

It is yet to be seen whether a counterexample exists for this scheme when n is restricted to powers of 2.

Third scheme. Same as the second scheme, but replacing (5/4) with the Sikkema constant, S = (4306+837*sqrt(6))/5832 (Lorentz 1986)(6), (Sikkema 1961)(8), which is slightly less than 1.09. In fact, the same counterexamples for the second scheme apply to this one, since this scheme merely multiplies the offset to bring the approximating polynomials closer to f.

Note on "clamping". For any approximation scheme, "clamping" the values of fbelow and fabove to fit the interval [0, 1] won't necessarily preserve the consistency requirement, even if the original scheme met that requirement.

Here is a counterexample that applies to any approximation scheme.

Let g and h be two polynomials in Bernstein form as follows:

• g has degree 5 and coefficients [10179/10000, 2653/2500, 9387/10000, 5049/5000, 499/500, 9339/10000].
• h has degree 6 and coefficients [10083/10000, 593/625, 9633/10000, 4513/5000, 4947/5000, 9473/10000, 4519/5000].

After elevating g to degree 6, g's coefficients are no less than h's, as required by the consistency property.

However, suppose we clamp coefficients above 1 to equal 1, so that g becomes g′ with coefficients [1, 1, 9387/10000, 1, 499/500, 9339/10000] and h becomes h′ with coefficients [1, 593/625, 9633/10000, 4513/5000, 4947/5000, 9473/10000, 4519/5000]. Elevating g′ to degree 6 gives coefficients [1, 1, 14387/15000, 19387/20000, 1499/1500, 59239/60000, 9339/10000], some of which are less than the corresponding coefficients of h′. Thus, for this pair of polynomials, clamping the coefficients destroys the consistent approximation property.
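This counterexample can be verified exactly with rational arithmetic; `elevate` below implements standard Bernstein degree elevation:

```python
from fractions import Fraction as F
from math import comb

def elevate(coeffs, r):
    """Elevate a polynomial in Bernstein form by r degrees."""
    n = len(coeffs) - 1
    return [sum(coeffs[j] * F(comb(n, j) * comb(r, k - j), comb(n + r, k))
                for j in range(max(0, k - r), min(n, k) + 1))
            for k in range(n + r + 1)]

g = [F(10179, 10000), F(2653, 2500), F(9387, 10000),
     F(5049, 5000), F(499, 500), F(9339, 10000)]
h = [F(10083, 10000), F(593, 625), F(9633, 10000), F(4513, 5000),
     F(4947, 5000), F(9473, 10000), F(4519, 5000)]

# Before clamping, consistency holds coefficientwise:
assert all(a >= b for a, b in zip(elevate(g, 1), h))
# After clamping coefficients above 1 down to 1, it fails:
gc = [min(c, F(1)) for c in g]
hc = [min(c, F(1)) for c in h]
assert not all(a >= b for a, b in zip(elevate(gc, 1), hc))
```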

## Approximate Bernoulli Factories

Although the schemes in the previous section don't work when building a family of polynomials that converge to a function f(λ), they are still useful for building an approximation to that function, in the form of a single polynomial, so that we get an approximate Bernoulli factory for f.

Here, f(λ) can be any function that maps the closed interval [0, 1] to [0, 1], even if it isn't continuous or a factory function (examples include the "step function" 0 if λ < 1/2 and 1 otherwise, or the function 2*min(λ, 1 − λ)). Continuous functions can be approximated arbitrarily well by an approximate Bernoulli factory (as a result of the so-called "Weierstrass approximation theorem"), but this is not the case in general for discontinuous functions.

To build an approximate Bernoulli factory:

1. We first find a polynomial in Bernstein form of degree n that is close to the desired function f.

The simplest choice for this polynomial has n+1 coefficients and its jth coefficient (starting at 0) is found as f(j/n), and this is used in the examples below. For this choice, if f is continuous, the polynomial can be brought arbitrarily close to f by choosing n high enough.

Whatever polynomial is used, the polynomial's coefficients must all lie in [0, 1].

2. Then, we use one of the algorithms in the section "Certain Polynomials" to simulate that polynomial, given its coefficients.
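A minimal sketch of these two steps follows, with the simulation step replaced by an exact evaluation of the polynomial so the approximation error can be measured directly; f(λ) = 2*min(λ, 1−λ) is one of the example functions mentioned above:

```python
from math import comb

def bernstein_approx(f, n):
    """Step 1: the simplest choice of degree-n Bernstein coefficients, f(j/n)."""
    return [f(j / n) for j in range(n + 1)]

def poly_at(coeffs, lam):
    """Value of the polynomial in Bernstein form at lam (this is the
    probability that the simulation in step 2 returns 1)."""
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * lam**k * (1 - lam)**(n - k)
               for k, c in enumerate(coeffs))

f = lambda x: 2 * min(x, 1 - x)
coeffs = bernstein_approx(f, 200)
err = max(abs(poly_at(coeffs, i / 100) - f(i / 100)) for i in range(101))
```

The coefficients all lie in [0, 1] as required, and at degree 200 the error for this function stays well under 0.1 (it is largest near the kink at λ = 1/2).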

Examples of approximate Bernoulli factories. The schemes in the previous section give an upper bound on the error on approximating f with a degree-n polynomial in Bernstein form. For example, the third scheme does this when f is a Lipschitz continuous function (with Lipschitz constant L). To find a degree n such that f is approximated with a maximum error of ε, we need to solve the following equation for n:

• ε = L*((4306+837*sqrt(6))/5832) / sqrt(n).

This has the following solution:

• n = L^2*(3604122*sqrt(6) + 11372525)/(17006112*ε^2).

This is generally not an integer, so we take n = ceil(n), the smallest integer not less than the solution. This can be simplified further to n = ceil(59393*L^2/(50000*ε^2)), which bounds the previous solution from above.

Now, if f is a Lipschitz continuous factory function with Lipschitz constant L, the following algorithm (adapted from "Certain Polynomials") simulates a polynomial that approximates f with a maximum error of ε:

1. Calculate n as ceil(59393*L^2/(50000*ε^2)).
2. Flip the input coin n times, and let j be the number of times the coin returned 1 this way.
3. With probability f(j/n), return 1. Otherwise, return 0. (If f(j/n) can be an irrational number, see "Algorithms for General Irrational Constants" for ways to sample this irrational probability exactly.)
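A sketch of this algorithm, assuming f(λ) = min(λ, 1−λ) (Lipschitz with L = 1) and an input coin simulated with a known λ only so the output can be checked:

```python
import random
from math import ceil

def approx_factory(flip, f, L, eps):
    """Approximate Bernoulli factory for a Lipschitz continuous f
    (Lipschitz constant L), with maximum error eps."""
    n = ceil(59393 * L * L / (50000 * eps * eps))   # step 1
    j = sum(flip() for _ in range(n))               # step 2
    return 1 if random.random() < f(j / n) else 0   # step 3

f = lambda x: min(x, 1 - x)
lam = 0.3  # known only to the simulated coin, not to the algorithm
coin = lambda: 1 if random.random() < lam else 0
```

Repeated runs should return 1 close to f(0.3) = 0.3 of the time, within the error bound plus sampling noise.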

As another example, we use the first scheme in the previous section to get the following approximate algorithm for C^2 continuous functions with maximum "slope-of-slope" of M:

1. Calculate n as ceil(M/(8*ε)) (upper bound of the solution to the equation ε = M/(8*n)).
2. Flip the input coin n times, and let j be the number of times the coin returned 1 this way.
3. With probability f(j/n), return 1. Otherwise, return 0.

We can proceed similarly with other methods that give an upper bound on the Bernstein-form polynomial approximation error, if they apply to the function f that we seek to approximate.

Approximate methods for linear functions. There are a number of approximate methods to simulate λ*c, where c > 1 and λ lies in [0, 1/c). ("Approximate" because this function touches 1 at 1/c, so it can't be a factory function.) Since the methods use only up to n flips, where n is an integer greater than 0, the approximation will be a polynomial of degree n.

• Henderson and Glynn (2003, Remark 4)(9) approximates the function λ*2 using a polynomial where the jth coefficient (starting at 0) is min((j/n)*2, 1−1/n). If g(λ) is that polynomial, then the error in approximating f is no greater than 1−g(1/2). g can be computed with the SymPy computer algebra library as follows: `from sympy.stats import *; g=2*E( Min(sum(Bernoulli(("B%d" % (i)),z) for i in range(n))/n,(S(1)-S(1)/n)/2))`.

• I found the following approximation for λ*c(10): "(1.) Set j to 0 and i to 0; (2.) If i ≥ n, return 0; (3.) Flip the input coin, and if it returns 1, add 1 to j; (4.) (Estimate the probability and return 1 if it 'went over'.) If (j/(i+1)) ≥ 1/c, return 1; (5.) Add 1 to i and go to step 2." Here, λ*c is approximated by a polynomial where the jth coefficient (starting at 0) is min((j/n)*c, 1). If g(λ) is that polynomial, then the error in approximating f is no greater than 1−g(1/c).

• The previous approximation generalizes the one given in section 6 of Nacu and Peres (2005)(1), which approximates λ*2.
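The polynomial underlying the second approximation above (coefficients min((j/n)*c, 1)) can be evaluated exactly to see the error behavior near λ = 1/c; n = 60 and c = 2 are arbitrary choices for this sketch:

```python
from math import comb

def approx_poly(c, n, lam):
    """Degree-n Bernstein polynomial with coefficients min((j/n)*c, 1),
    approximating lam*c from below on [0, 1/c)."""
    return sum(min(j / n * c, 1) * comb(n, j) * lam**j * (1 - lam)**(n - j)
               for j in range(n + 1))

c, n = 2, 60
# Far from 1/c, the approximation is very close to lam*c...
near = approx_poly(c, n, 0.3)
# ...but at lam = 1/c, the polynomial falls noticeably short of 1.
gap = 1 - approx_poly(c, n, 1 / c)
```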

Note: Bias and variance are the two sources of error in a randomized estimation algorithm. Let g(λ) be an approximation of f(λ). The original Bernoulli factory for f, if it exists, has bias 0 and variance f(λ)*(1−f(λ)), but the approximate Bernoulli factory has bias g(λ) − f(λ) and variance g(λ)*(1−g(λ)). Unlike with bias, there are ways to reduce variance, which are outside the scope of this document. An estimation algorithm's mean squared error equals variance plus square of bias.

## Achievable Simulation Rates

In general, the number of input coin flips needed by any Bernoulli factory algorithm for a factory function f(λ) depends on how "smooth" the function f is.

The following table summarizes the rate of simulation (in terms of the number of input coin flips needed) that can be achieved in theory depending on f(λ), where λ is the input coin's unknown probability of heads. In the table below:

• λ lies in the interval [ε, 1−ε] for some ε > 0.
• Δ(n, r, λ) = O(max(sqrt(λ*(1−λ)/n), 1/n)^r), that is, O((1/n)^r) near λ = 0 or 1, and O((1/n)^(r/2)) elsewhere. (O(h(n)) roughly means "bounded from above by h(n) times a constant".)

| Property of simulation | Property of f |
| --- | --- |
| Requires no more than n input coin flips. | If and only if f can be written as a polynomial in Bernstein form of degree n with coefficients in [0, 1] (Goyal and Sigman 2012)(11). |
| Requires a finite number of flips on average. Also known as "realizable" by Flajolet et al. (2010)(4). | Only if f is Lipschitz continuous (Nacu and Peres 2005)(1).<br>Whenever f admits a fast simulation (Mendo 2019)(12). |
| Number of flips required, raised to power of r, is finite on average and has a tail that drops off uniformly for every λ. | Only if f is C^r continuous (has r or more continuous derivatives, or "slope" functions) (Nacu and Peres 2005)(1). |
| Requires more than n flips with probability Δ(n, r + 1, λ), for integer r ≥ 0 and every λ. (The greater r is, the faster the simulation.) | Only if f is C^r continuous and the rth derivative is in the Zygmund class (has no vertical slope) (Holtz et al. 2011)(13). |
| Requires more than n flips with probability Δ(n, α, λ), for non-integer α > 0 and every λ. (The greater α is, the faster the simulation.) | If and only if f is C^r continuous and the rth derivative is (α−r)-Hölder continuous, where r = floor(α) (Holtz et al. 2011)(13). Assumes f is bounded away from 0 and 1. |
| "Fast simulation" (requires more than n flips with a probability that decays exponentially as n gets large). Also known as "strongly realizable" by Flajolet et al. (2010)(4). | If and only if f is real analytic (is C^∞ continuous, that is, has a continuous kth derivative for every k, and agrees with its Taylor series "near" every point) (Nacu and Peres 2005)(1). |
| Average number of flips bounded from below by (f′(λ))^2*λ*(1−λ)/(f(λ)*(1−f(λ))), where f′ is the first derivative of f. | Whenever f admits a fast simulation (Mendo 2019)(12). |

Notes:

1. By the results of Holtz et al., it is suspected that the target function f can't be simulated using a finite number of flips on average unless f is C^2 continuous.
2. If a function is constant on some non-empty open interval in its domain, but is not constant on the whole domain, then it can't be real analytic.

## Complexity

The following note shows the complexity of the algorithm for 1/φ in the main article, where φ is the golden ratio.

Let E[N] be the expected (average) number of unbiased random bits (fair coin flips) generated by the algorithm.

Then, since each bit is independent, E[N] = 2*φ as shown below.

• Each iteration stops the algorithm with probability p = (1/2) + (1−(1/2)) * (1/φ) (1/2 for the initial bit and 1/φ for the recursive run; (1−(1/2)) because we're subtracting the (1/2) earlier on the right-hand side from 1).
• Thus, the expected (average) number of iterations is E[T] = 1/p by a well-known rejection sampling argument, since the algorithm doesn't depend on iteration counts.
• Each iteration uses 1 * (1/2) + (1 + E[N]) * (1/2) bits on average, so the whole algorithm uses E[N] = (1 * (1/2) + (1 + E[N]) * (1/2)) * E[T] bits on average (each iteration consumes either 1 bit with probability 1/2, or (1 + E[N]) bits with probability 1/2). This equation has the solution E[N] = 1 + sqrt(5) = 2*φ.
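The derivation above can be confirmed with SymPy by solving the bit-count equation directly (here E stands for E[N]):

```python
from sympy import sqrt, symbols, Eq, solve, simplify, Rational

E = symbols('E', positive=True)
phi = (1 + sqrt(5)) / 2
p = Rational(1, 2) + Rational(1, 2) / phi   # per-iteration stopping probability
ET = 1 / p                                  # expected number of iterations
bits_per_iter = 1 * Rational(1, 2) + (1 + E) * Rational(1, 2)
sol = solve(Eq(E, bits_per_iter * ET), E)
assert len(sol) == 1
assert simplify(sol[0] - 2 * phi) == 0      # E[N] = 1 + sqrt(5) = 2*phi
```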

Also, on average, half of these flips (that is, φ of them) show 1 and half show 0, since the bits are unbiased (the coin is fair).

A similar analysis to the one above can be used to find the expected (average) time complexity of many Bernoulli factory algorithms.

## Examples of Bernoulli Factory Approximation Schemes

The following are approximation schemes and hints to simulate a coin of probability f(λ) given an input coin with probability of heads of λ. The schemes were generated automatically using `approxscheme2` and have not been rigorously verified for correctness.

• Let f(λ) = min(1/2, λ). Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 9563/10000 if n<8; otherwise, f(k/n) + 322613/(250000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 371/500 if n<4; otherwise, f(k/n) + 967839/(2000000*sqrt(n)).
• fbound(n) = [0, 1].
• Let f(λ) = cosh(λ) − 1. Then simulate f by first flipping the input coin twice. If both flips return 0, return 0. Otherwise, flip the input coin. If it returns 0, return 0. Otherwise, simulate g(λ) (a function described below) and return the result.
Let g(λ) = 1/4 if λ = 0; −(cosh(λ) − 1)/(λ^2*(λ − 2)) otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = 503/2500 if n<4; otherwise, g(k/n) − 136501/(700000*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 2301/10000 if n<4; otherwise, g(k/n) − 1774513/(22400000*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = cosh(λ) − 3/4. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be convex and twice differentiable, leading to:
• fbelow(n, k) = 487/2500 if n<4; otherwise, f(k/n) − 154309/(700000*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 1043/5000 if n<4; otherwise, f(k/n) − 462927/(2800000*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = cosh(λ)/4 − 1/4. Then simulate f by first flipping the input coin twice. If both flips return 0, return 0. Otherwise, flip the input coin. If it returns 0, return 0. Otherwise, simulate g(λ) (a function described below) and return the result.
Let g(λ) = 1/16 if λ = 0; −(cosh(λ) − 1)/(4*λ^2*(λ − 2)) otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = 503/10000 if n<4; otherwise, g(k/n) − 17063/(350000*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 23/400 if n<4; otherwise, g(k/n) − 221819/(11200000*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = exp(λ)/4 − 1/4. Then simulate f by first flipping the input coin twice. If both flips return 0, return 0. Otherwise, simulate g(λ) (a function described below) and return the result.
Let g(λ) = 1/8 if λ = 0; -(exp(λ) − 1)/(4*λ*(λ − 2)) otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = 7/100 if n<4; otherwise, g(k/n) − 9617/(43750*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 207/2000 if n<4; otherwise, g(k/n) − 9617/(112000*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = sin(3*λ)/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be concave and twice differentiable, leading to:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 1319/2000 if n<4; otherwise, f(k/n) + 9/(14*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 1299/2000 if n<4; otherwise, f(k/n) + 135/(224*n).
• fbound(n) = [0, 1].
• Let f(λ) = exp(−λ). Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be convex and twice differentiable, leading to:
• fbelow(n, k) = 3321/10000 if n<4; otherwise, f(k/n) − 1/(7*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 861/2500 if n<4; otherwise, f(k/n) − 3/(32*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = 3/4 − sqrt(−λ*(λ − 1)). Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n) − 1545784563/(400000000*n^(1/4)).
• fabove(n, k) = f(k/n).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n) − 26278337571/(25600000000*n1/4).
• fabove(n, k) = f(k/n).
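For a (1/2)-Hölder continuous function like this one, the offset shrinks only as n^(1/4) rather than as n, reflecting the slower Bernstein-polynomial convergence near the square-root cusps. A Python sketch (an illustrative grid check; the helper names are mine) confirms the lower coefficients stay below f:

```python
import math

def f(lam):
    return 3 / 4 - math.sqrt(lam * (1 - lam))

def fbelow(n, k):
    # lower coefficients from the scheme above
    return f(k / n) - 1545784563 / (400000000 * n ** 0.25)

def bernstein(coeffs, n, lam):
    # evaluate the degree-n polynomial with Bernstein coefficients coeffs at λ
    return sum(coeffs[k] * math.comb(n, k) * lam**k * (1 - lam) ** (n - k)
               for k in range(n + 1))

# f is convex, so its Bernstein polynomial lies above f; the O(n^(−1/4))
# offset must bring it back below f on all of [0, 1].
for n in (1, 4, 16, 64):
    coeffs = [fbelow(n, k) for k in range(n + 1)]
    for i in range(101):
        lam = i / 100
        assert bernstein(coeffs, n, lam) <= f(lam)
```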
• Let f(λ) = 3*sin(sqrt(3)*sqrt(sin(2*λ)))/4 + 1/50. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n) − 709907859/(100000000*n^(1/4)).
• fabove(n, k) = f(k/n) + 709907859/(100000000*n^(1/4)).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n) − 6389170731/(3200000000*n^(1/4)).
• fabove(n, k) = f(k/n) + 6389170731/(3200000000*n^(1/4)).
• Let f(λ) = sin(π*λ)/4 + 1/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be concave and twice differentiable, leading to:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 4191/5000 if n<4; otherwise, f(k/n) + 246741/(700000*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 8313/10000 if n<4; otherwise, f(k/n) + 14557719/(44800000*n).
• fbound(n) = [0, 1].
• Let f(λ) = sqrt(λ). Then simulate f by first simulating a polynomial with the following coefficients: [0, 1]. If it returns 0, return 1. Otherwise, simulate g(λ) (a function described below) and return 1 minus the result. During the simulation, instead of flipping the input coin as usual, a different coin is flipped which does the following: "Flip the input coin and return 1 minus the result."
Let g(λ) = 1/2 if λ = 0; (1 − sqrt(1 − λ))/λ otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = g(k/n) − 147735488601/(80000000*n^(1/4)).
• fabove(n, k) = g(k/n).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = g(k/n) − 147735488601/(5120000000*n^(1/4)).
• fabove(n, k) = g(k/n).
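This reduction can be verified algebraically: the degree-1 polynomial with coefficients [0, 1], run on the inverted coin, returns 1 with probability 1 − λ, so the procedure as a whole returns 1 with probability λ + (1 − λ)·(1 − g(1 − λ)), which simplifies to sqrt(λ). A quick numerical check (illustrative only; function names are mine):

```python
import math

def g(x):
    # g(λ) = 1/2 at λ = 0; (1 − sqrt(1 − λ))/λ otherwise
    if x == 0:
        return 0.5
    return (1 - math.sqrt(1 - x)) / x

# The reduction runs on a coin with heads probability 1 − λ, so:
#   P(return 1) = λ·1 + (1 − λ)·(1 − g(1 − λ))
for i in range(101):
    lam = i / 100
    p_one = lam + (1 - lam) * (1 - g(1 - lam))
    assert abs(p_one - math.sqrt(lam)) < 1e-12
```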
• Let f(λ) = 1 − λ^2. Then simulate f by first flipping the input coin. If it returns 0, return 1. Otherwise, flip the input coin twice. If both flips return 0, return 1. Otherwise, simulate g(λ) (a function described below) and return 1 minus the result.
Let g(λ) = 1/(2 − λ). Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be convex and twice differentiable, leading to:
• fbelow(n, k) = 857/2000 if n<4; otherwise, g(k/n) − 2/(7*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 4709/10000 if n<4; otherwise, g(k/n) − 13/(112*n).
• fabove(n, k) = g(k/n).
• fbound(n) = [0, 1].
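Tracing the branches of the reduction gives its return-1 probability in closed form, which a short Python sketch can check numerically (illustrative only; the function names are mine):

```python
def g(lam):
    return 1 / (2 - lam)

def f(lam):
    return 1 - lam ** 2

# First flip 0 (prob 1−λ) → return 1; then two flips both 0
# (prob λ·(1−λ)²) → return 1; otherwise return 1 minus a g-simulation.
for i in range(101):
    lam = i / 100
    p_one = ((1 - lam) + lam * (1 - lam) ** 2
             + lam * (1 - (1 - lam) ** 2) * (1 - g(lam)))
    assert abs(p_one - f(lam)) < 1e-12
```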
• Let f(λ) = sqrt(1 − λ^2). Then simulate f by first flipping the input coin. If it returns 0, return 1. Otherwise, flip the input coin. If it returns 0, return 1. Otherwise, simulate g(λ) (a function described below) and return 1 minus the result.
Let g(λ) = 1/2 if λ = 0; (1 − sqrt(1 − λ^2))/λ^2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = g(k/n) − 405238440128399386484736/(1953125*n^(1/4)).
• fabove(n, k) = g(k/n).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = g(k/n) − 6331850627006240413824/(1953125*n^(1/4)).
• fabove(n, k) = g(k/n).
• Let f(λ) = cos(π*λ)/4 + 1/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be twice differentiable, leading to:
• fbelow(n, k) = 809/5000 if n<4; otherwise, f(k/n) − 246741/(700000*n).
• fabove(n, k) = 4191/5000 if n<4; otherwise, f(k/n) + 246741/(700000*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 409/2000 if n<4; otherwise, f(k/n) − 8142453/(44800000*n).
• fabove(n, k) = 1591/2000 if n<4; otherwise, f(k/n) + 8142453/(44800000*n).
• fbound(n) = [0, 1].
• Let f(λ) = λ*sin(7*π*λ)/4 + 1/2. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = 523/10000 if n<64; otherwise, f(k/n) − 11346621/(700000*n).
• fabove(n, k) = 1229/1250 if n<64; otherwise, f(k/n) + 11346621/(700000*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 681/10000 if n<32; otherwise, f(k/n) − 34039863/(4480000*n).
• fabove(n, k) = 4837/5000 if n<32; otherwise, f(k/n) + 34039863/(4480000*n).
• fbound(n) = [0, 1].
• Let f(λ) = sin(4*π*λ)/4 + 1/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be twice differentiable, leading to:
• fbelow(n, k) = 737/10000 if n<32; otherwise, f(k/n) − 1973921/(350000*n).
• fabove(n, k) = 9263/10000 if n<32; otherwise, f(k/n) + 1973921/(350000*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 1123/10000 if n<32; otherwise, f(k/n) − 1973921/(448000*n).
• fabove(n, k) = 8877/10000 if n<32; otherwise, f(k/n) + 1973921/(448000*n).
• fbound(n) = [0, 1].
• Let f(λ) = sin(6*π*λ)/4 + 1/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be twice differentiable, leading to:
• fbelow(n, k) = 517/10000 if n<64; otherwise, f(k/n) − 2220661/(175000*n).
• fabove(n, k) = 9483/10000 if n<64; otherwise, f(k/n) + 2220661/(175000*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 1043/10000 if n<64; otherwise, f(k/n) − 104371067/(11200000*n).
• fabove(n, k) = 8957/10000 if n<64; otherwise, f(k/n) + 104371067/(11200000*n).
• fbound(n) = [0, 1].
• Let f(λ) = sin(2*λ)/2. Then, for every integer n that's a power of 2, starting from 1:
• The function was detected to be concave and twice differentiable, leading to:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 2851/5000 if n<4; otherwise, f(k/n) + 2/(7*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 5613/10000 if n<4; otherwise, f(k/n) + 1/(4*n).
• fbound(n) = [0, 1].
• Let f(λ) = λ^2/2 + 1/10 if λ ≤ 1/2; λ/2 − 1/40 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = 321/5000 if n<4; otherwise, f(k/n) − 1/(7*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 693/10000 if n<4; otherwise, f(k/n) − 55/(448*n).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = λ/2 if λ ≤ 1/2; (4*λ − 1)/(8*λ) otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and twice differentiable using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 893/2000 if n<4; otherwise, f(k/n) + 2/(7*n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 4197/10000 if n<4; otherwise, f(k/n) + 5/(28*n).
• fbound(n) = [0, 1].
• Let f(λ) = 1/2 − sqrt(1 − 2*λ)/2 if λ < 1/2; sqrt(2*λ − 1)/2 + 1/2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n) − 1545784563/(400000000*n^(1/4)).
• fabove(n, k) = f(k/n) + 1545784563/(400000000*n^(1/4)).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n) − 10820491941/(12800000000*n^(1/4)).
• fabove(n, k) = f(k/n) + 10820491941/(12800000000*n^(1/4)).
• Let f(λ) = 1/2 − sqrt(1 − 2*λ)/4 if λ < 1/2; sqrt(2*λ − 1)/4 + 1/2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n) − 772969563/(400000000*n^(1/4)).
• fabove(n, k) = f(k/n) + 772969563/(400000000*n^(1/4)).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 193/5000 if n<16; otherwise, f(k/n) − 5410786941/(12800000000*n^(1/4)).
• fabove(n, k) = 4807/5000 if n<16; otherwise, f(k/n) + 5410786941/(12800000000*n^(1/4)).
• fbound(n) = [0, 1].
• Let f(λ) = λ/2 + (2*λ − 1)^(3/2)/12 − 1/12 if λ ≥ 1/2; λ/2 + (1 − 2*λ)^(3/2)/12 − 1/12 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n) − 322613/(500000*sqrt(n)).
• fabove(n, k) = f(k/n).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n) − 3548743/(32000000*sqrt(n)).
• fabove(n, k) = f(k/n).
• Let f(λ) = 1/2 − sqrt(1 − 2*λ)/8 if λ < 1/2; sqrt(2*λ − 1)/8 + 1/2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be (1/2)-Hölder continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = 333/10000 if n<64; otherwise, f(k/n) − 386562063/(400000000*n^(1/4)).
• fabove(n, k) = 9667/10000 if n<64; otherwise, f(k/n) + 386562063/(400000000*n^(1/4)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 451/2000 if n<4; otherwise, f(k/n) − 2705934441/(12800000000*n^(1/4)).
• fabove(n, k) = 1549/2000 if n<4; otherwise, f(k/n) + 2705934441/(12800000000*n^(1/4)).
• fbound(n) = [0, 1].
• Let f(λ) = λ/4 + (2*λ − 1)^(3/2)/24 + 5/24 if λ ≥ 1/2; λ/4 + (1 − 2*λ)^(3/2)/24 + 5/24 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be convex and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = 443/5000 if n<4; otherwise, f(k/n) − 322613/(1000000*sqrt(n)).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = 1111/5000 if n<4; otherwise, f(k/n) − 3548743/(64000000*sqrt(n)).
• fabove(n, k) = f(k/n).
• fbound(n) = [0, 1].
• Let f(λ) = 3*λ/2 if λ ≤ 1 − λ; 3/2 − 3*λ/2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 124/125 if n<64; otherwise, f(k/n) + 967839/(500000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 1863/2000 if n<64; otherwise, f(k/n) + 2903517/(2000000*sqrt(n)).
• fbound(n) = [0, 1].
• Let f(λ) = 9*λ/5 if λ ≤ 1 − λ; 9/5 − 9*λ/5 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = f(k/n) + 2903517/(1250000*sqrt(n)).
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = f(k/n) + 8710551/(5000000*sqrt(n)).
• Let f(λ) = min(λ, 1 − λ). Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 9563/10000 if n<8; otherwise, f(k/n) + 322613/(250000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 123/125 if n<4; otherwise, f(k/n) + 967839/(1000000*sqrt(n)).
• fbound(n) = [0, 1].
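For a Lipschitz continuous but not differentiable function like this "hat" function, the offset shrinks as sqrt(n) rather than as n. The sketch below (a grid-based sanity check only; helper names are mine) illustrates that the upper coefficients for n ≥ 8 do give a polynomial lying above f:

```python
import math

def f(lam):
    return min(lam, 1 - lam)

def fabove(n, k):
    # upper coefficients from the scheme above (n ≥ 8 case)
    return f(k / n) + 322613 / (250000 * math.sqrt(n))

def bernstein(coeffs, n, lam):
    # evaluate the degree-n polynomial with Bernstein coefficients coeffs at λ
    return sum(coeffs[k] * math.comb(n, k) * lam**k * (1 - lam) ** (n - k)
               for k in range(n + 1))

# f is concave, so its Bernstein polynomial lies below f; the O(1/sqrt(n))
# offset must lift it back above f everywhere on [0, 1].
for n in (8, 16, 32):
    coeffs = [fabove(n, k) for k in range(n + 1)]
    for i in range(101):
        lam = i / 100
        assert bernstein(coeffs, n, lam) >= f(lam)
```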
• Let f(λ) = λ/2 if λ ≤ 1 − λ; 1/2 − λ/2 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 5727/10000 if n<4; otherwise, f(k/n) + 322613/(500000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 123/250 if n<4; otherwise, f(k/n) + 967839/(2000000*sqrt(n)).
• fbound(n) = [0, 1].
• Let f(λ) = 19*λ/20 if λ ≤ 1 − λ; 19/20 − 19*λ/20 otherwise. Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 1817/2000 if n<8; otherwise, f(k/n) + 6129647/(5000000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 2337/2500 if n<4; otherwise, f(k/n) + 18388941/(20000000*sqrt(n)).
• fbound(n) = [0, 1].
• Let f(λ) = min(1/8, 3*λ). Then, for every integer n that's a power of 2, starting from 1:
• Detected to be concave and Lipschitz continuous using numerical methods, which may be inaccurate:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 4047/5000 if n<32; otherwise, f(k/n) + 967839/(250000*sqrt(n)).
• fbound(n) = [0, 1].
• Generated using tighter bounds than necessarily proven:
• fbelow(n, k) = f(k/n).
• fabove(n, k) = 171/400 if n<4; otherwise, f(k/n) + 967839/(1600000*sqrt(n)).
• fbound(n) = [0, 1].