Open Questions on the Bernoulli Factory Problem
Background
Suppose there is a coin that shows heads with an unknown probability, $\lambda$. The goal is to use that coin (and possibly also a fair coin) to build a “new” coin that shows heads with a probability that depends on $\lambda$, call it $f(\lambda)$. This is the Bernoulli factory problem, and it can be solved for a function $f(\lambda)$ only if $f$ is continuous. (For example, by flipping the coin twice and showing heads only if exactly one of the two flips is heads, the probability $2\lambda(1-\lambda)$ can be simulated.)
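The parenthetical example can be run directly; the following Python sketch (where the bias 0.3 merely stands in for the unknown $\lambda$) estimates the new coin’s heads probability:

```python
import random

def biased_coin(lam):
    """The input coin: heads (1) with unknown probability lam."""
    return 1 if random.random() < lam else 0

def new_coin(lam):
    """Flip the input coin twice; heads only if exactly one flip is heads.
    The result is heads with probability 2*lam*(1-lam)."""
    return 1 if biased_coin(lam) + biased_coin(lam) == 1 else 0

random.seed(1)
lam = 0.3
mean = sum(new_coin(lam) for _ in range(100000)) / 100000
print(mean)  # close to 2*0.3*0.7 = 0.42
```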
This page contains several questions about the Bernoulli factory problem. Answers to them will greatly improve my pages on this site about Bernoulli factories. If you can answer any of them, post an issue in the GitHub issues page.
Note: The Bernoulli factory problem is a special case of a more general mathematical problem that I call “The Sampling Problem”.
Contents
- Background
- Contents
- Polynomials that approach a Bernoulli factory function “fast”
- Other Questions
- End Notes
- Notes
Polynomials that approach a Bernoulli factory function “fast”
This question involves solving the Bernoulli factory problem with polynomials.1
In this question, a polynomial $P(x)$ is written in Bernstein form of degree $n$ if it is written as—
\[P(x)=\sum_{k=0}^n a_k {n \choose k} x^k (1-x)^{n-k},\]where the real numbers $a_0, …, a_n$ are the polynomial’s Bernstein coefficients.
The degree-$n$ Bernstein polynomial of an arbitrary function $f(x)$ has Bernstein coefficients $a_k = f(k/n)$. In general, this Bernstein polynomial differs from $f$ even if $f$ is a polynomial.
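As a small illustration in Python: for $f(x)=x^2$, the degree-2 Bernstein polynomial has coefficients $f(0)=0$, $f(1/2)=1/4$, $f(1)=1$, and evaluating it at $x=1/2$ gives $3/8$ rather than $f(1/2)=1/4$:

```python
from math import comb

def bernstein_eval(coeffs, x):
    """Evaluate a polynomial in Bernstein form with the given coefficients."""
    n = len(coeffs) - 1
    return sum(a * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, a in enumerate(coeffs))

def bernstein_poly_coeffs(f, n):
    """Bernstein coefficients of the degree-n Bernstein polynomial of f."""
    return [f(k / n) for k in range(n + 1)]

f = lambda x: x * x
coeffs = bernstein_poly_coeffs(f, 2)   # [0.0, 0.25, 1.0]
print(bernstein_eval(coeffs, 0.5))     # 0.375, while f(0.5) = 0.25
```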
Main Question
Suppose $f:[0,1]\to [0,1]$ is continuous and belongs to a large class of functions (for example, the $k$-th derivative, $k\ge 0$, is continuous, Lipschitz continuous, concave, strictly increasing, or bounded variation, or $f$ is real analytic).
- (Exact Bernoulli factory): Compute the Bernstein coefficients of a sequence of polynomials ($g_n$) of degree 2, 4, 8, …, $2^i$, … that converge to $f$ from below and satisfy: $(g_{2n}-g_{n})$ is a polynomial with nonnegative Bernstein coefficients once it’s rewritten to a polynomial in Bernstein form of degree exactly $2n$. (See Note 3 in “End Notes”.) Assume $0\lt f(\lambda)\lt 1$ or a piecewise polynomial can come between $f$ or $1-f$ and the x-axis.
- (Approximate Bernoulli factory): Given $\epsilon > 0$, compute the Bernstein coefficients of a polynomial or rational function (of some degree $n$) that is within $\epsilon$ of $f$.
- (Series expansion of simple functions): Find a nonnegative random variable $X$ and a series $f(\lambda)=\sum_{a\ge 0}\gamma_a(\lambda)$ such that $\gamma_a(\lambda)/\mathbb{P}(X=a)$ (letting 0/0 equal 0) is a polynomial or rational function with rational Bernstein coefficients lying in $[0, 1]$. (See note 1 in “End Notes”.)
The convergence rate must be $O(1/n^{r/2})$ if the class has only functions with Lipschitz-continuous $(r-1)$-th derivative. The method may not introduce transcendental or trigonometric functions (as with Chebyshev interpolants).
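For the approximate Bernoulli factory question, one already-implementable special case is worth noting: if $f$ has a continuous second derivative bounded by $M_2$, then $|B_n(f)-f|\le M_2\,\lambda(1-\lambda)/(2n)\le M_2/(8n)$, which yields an explicit degree for any tolerance. A Python sketch (the function $\lambda^3$ and the bound $M_2=6$ are illustrative choices only):

```python
from math import comb

def approx_factory_coeffs(f, m2, eps):
    """Bernstein coefficients of a polynomial within eps of f, assuming
    f has a continuous second derivative bounded by m2 on [0, 1]."""
    n = 1
    while m2 / (8 * n) > eps:   # |B_n(f) - f| <= m2/(8n)
        n *= 2
    return [f(k / n) for k in range(n + 1)]

def bernstein_eval(coeffs, x):
    """Evaluate a polynomial in Bernstein form with the given coefficients."""
    n = len(coeffs) - 1
    return sum(a * comb(n, k) * x**k * (1 - x)**(n - k)
               for k, a in enumerate(coeffs))

f = lambda x: x**3              # |f''| <= 6 on [0, 1]
eps = 0.01
coeffs = approx_factory_coeffs(f, 6, eps)
err = max(abs(bernstein_eval(coeffs, i / 200) - f(i / 200)) for i in range(201))
print(len(coeffs) - 1, err)     # degree 128; error well below 0.01
```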
Solving the Bernoulli factory problem with polynomials
An algorithm (Łatuszyński et al. 2009/2011)2 simulates a function that admits a Bernoulli factory $f(\lambda)$ via two sequences of polynomials that converge from above and below to that function. Roughly speaking, the algorithm works as follows:
- Generate U, a uniform random variate in $[0, 1]$.
- Flip the input coin (with a probability of heads of $\lambda$), then build an upper and lower bound for $f(\lambda)$, based on the outcomes of the flips so far. In this case, these bounds come from two degree-$n$ polynomials that approach $f$ as $n$ gets large, where $n$ is the number of coin flips so far in the algorithm.
- If U is less than or equal to the lower bound, return 1. If U is greater than the upper bound, return 0. Otherwise, go to step 2.
The result of the algorithm is 1 with probability exactly equal to $f(\lambda)$, or 0 otherwise.
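The loop can be sketched in Python. The sketch below is a deliberately degenerate instance: for a linear $f(\lambda)=c\lambda$, the degree-$n$ Bernstein coefficients $c\,k/n$ serve as both the lower and the upper bound, so the algorithm stops after one flip; for general $f$, the bounds must come from converging polynomial sequences of the kind discussed next:

```python
import random

def factory_linear(flip, c):
    """Łatuszyński-style sandwich algorithm, sketched for f(λ) = c·λ
    with 0 < c < 1.  Linear functions are reproduced exactly by the
    Bernstein form, so lower and upper bounds coincide at every step."""
    u = random.random()                  # step 1: uniform variate U
    heads = n = 0
    while True:
        heads += flip()                  # step 2: flip the input coin
        n += 1
        lower = upper = c * heads / n    # Bernstein coefficient a(n, heads)
        if u <= lower:                   # step 3: U at or below the lower bound
            return 1
        if u > upper:                    # U above the upper bound
            return 0

random.seed(2)
lam, c = 0.6, 0.5
flip = lambda: 1 if random.random() < lam else 0
mean = sum(factory_linear(flip, c) for _ in range(100000)) / 100000
print(mean)  # close to 0.5 * 0.6 = 0.3
```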
However, the algorithm requires the polynomial sequences to meet certain requirements, one of which is:
For $f(\lambda)$ there must be a sequence of polynomials ($g_n$) in Bernstein form of degree 1, 2, 3, … that converge to $f$ from below and satisfy: $(g_{n+1}-g_{n})$ is a polynomial with nonnegative Bernstein coefficients once it’s rewritten to a polynomial in Bernstein form of degree exactly $n+1$ (see Note 3 in “End Notes”; Nacu and Peres (2005)3; Holtz et al. (2011)4). For $1-f(\lambda)$ there must likewise be a sequence of this kind.
A Matter of Efficiency
However, ordinary Bernstein polynomials converge to a function at the rate $\Omega(1/n)$ in general, a result known since Voronovskaya (1932)5 and a rate that will lead to an infinite expected number of coin flips in general. (See also my supplemental notes.)
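This rate is easy to observe numerically: for $f(\lambda)=\lambda^2$, the Bernstein polynomial satisfies $B_n(f)(\lambda)-f(\lambda)=\lambda(1-\lambda)/n$, so doubling the degree only halves the error. A quick Python check:

```python
from math import comb

def bernstein_op(f, n, x):
    """Evaluate the degree-n Bernstein polynomial of f at x."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

f = lambda x: x * x
for n in (10, 20, 40):
    err = bernstein_op(f, n, 0.5) - f(0.5)   # equals 0.25/n at x = 1/2
    print(n, err)   # 0.025, 0.0125, 0.00625
```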
But Lorentz (1966)6 showed that if the function is positive and has a continuous $k$-th derivative, there are polynomials with nonnegative Bernstein coefficients that converge at the rate $O(1/n^{k/2})$ (and thus can enable a finite expected number of coin flips if the function is “smooth” enough).
Thus, researchers have studied alternatives to Bernstein polynomials that improve the convergence rate for “smoother” functions. See Holtz et al. (2011)4, Sevy (1991)7, Waldron (2009)8, Costabile et al. (1996)9, Han (2003)10, Khosravian-Arab et al. (2018)11, and references therein; see also Micchelli (1973)12, Güntürk and Li (2021a)13, (2021b)14, Draganov (2024)15, and Tachev (2022)16.
These alternative polynomials usually come with results where the error bound is the desired $O(1/n^{k/2})$, but most of those results (with the notable exception of Sevy) have hidden constants with no upper bounds given, making them unimplementable (that is, it can’t be known beforehand whether a given polynomial will come close to the target function within a user-specified error tolerance).
A Conjecture on Polynomial Approximation
The following is a conjecture that could help reduce this problem to the problem of finding explicit error bounds when approximating a function by polynomials.
Let $f(\lambda):[0,1]\to(0,1)$ have a continuous $r$-th derivative, where $r\ge 1$, let $M$ be the maximum of the absolute value of $f$ and its derivatives up to the $r$-th derivative, and denote the Bernstein polynomial of degree $n$ of a function $g$ as $B_n(g)$. Let $W_{2^0}(\lambda), W_{2^1}(\lambda), …, W_{2^i}(\lambda),…$ be a sequence of bounded functions on [0, 1] that converge uniformly to $f$.
For each integer $n\ge 1$ that’s a power of 2, suppose that there is $D>0$ such that—
\[\text{abs}(f(\lambda)-B_n(W_n(\lambda))) \le DM/n^{r/2},\]whenever $0\le \lambda\le 1$. Then there is $C_0\ge D$ such that the polynomials $(g_n)$ in Bernstein form of degree 2, 4, 8, …, $2^i$, …, defined as $g_n=B_n(W_n(\lambda)) - C_0 M/n^{r/2}$, converge from below to $f$ and satisfy: $(g_{2n}-g_{n})$ is a polynomial with nonnegative Bernstein coefficients once it’s rewritten to a polynomial in Bernstein form of degree exactly $2n$.
Equivalently (see also Nacu and Peres (2005)3), there is $C_1>0$ such that the inequality—
\[0\le W_{2n}\left(\frac{k}{2n}\right) - \sum_{i=0}^k W_n\left(\frac{i}{n}\right)\sigma_{n,k,i}\le C_1M/n^{r/2},\tag{PB}\]holds true for each integer $n\ge 1$ that’s a power of 2 and whenever $0\le k\le 2n$, where $\sigma_{n,k,i} = {n\choose i}{n\choose {k-i}}/{2n \choose k}=\mathbb{P}(X_k=i)$ and $X_k$ is a hypergeometric($2n$, $k$, $n$) random variable.
$C_0$ or $C_1$ may depend on $r$ and the sequence $W_n$, but not on $f$ or $n$. When $C_0$ or $C_1$ exists, find a good upper bound for it.
Note: This conjecture may be easy to prove if $W_n$ reproduces polynomials of degree $(r-1)$ or less. But there are $B_n(W_n)$ (notably the iterated Boolean sum of Bernstein polynomials) that don’t do so and yet converge at the rate $O(n^{-r/2})$ for some $r\gt 2$. Also, see notes 3 and 4 in “End Notes”.
Note: I believe there is a counterexample to this conjecture, namely the sequence—
\[B_n(W_n(\lambda))=\frac{(T_n(1-2\lambda)+1)\varphi_n}{2 \mu_n} + 1/2,\]where $\varphi_n$ is a strictly decreasing sequence of positive numbers that tends slowly enough to 0, $\mu_n$ is the maximum Bernstein coefficient (in absolute value) of the degree-$n$ polynomial $(T_n(1-2\lambda)+1)/2$, and $T_n(x)$ is the Chebyshev polynomial of the first kind of degree $n$. $W_n$ is then a piecewise linear function that connects the Bernstein coefficients of $B_n(W_n(\lambda))$, so that $(W_n)$ is a sequence of bounded continuous functions that converges at an arbitrarily slow rate (depending on $\varphi_n$) to the constant 1/2. $B_n(W_n(\lambda))$ converges uniformly, at an exponential rate, to 1/2, so that $M = 1/2$.
If this counterexample is valid, the conjecture may still be true with an additional assumption on the convergence rate of $W_n$, say, $O(1/n)$ or $O(1/n^{r/2})$ or $O(1/n^{(r-1)/2})$.
Strategies
The following are some strategies for answering these questions:
- Verify my proofs for the results on error bounds for certain polynomials in “Results Used in Approximations By Polynomials”, including:
- Iterated Boolean sums (linear combinations of iterates) of Bernstein polynomials ($B_n(W_n) = f-(f-B_n(f))^k$; see Note 4 in “End Notes” later in this page): Propositions B10C and B10D.
- Linear combinations of Bernstein polynomials (see Costabile et al. (1996)9): Proposition B10.
- The Lorentz operator (Holtz et al. 2011)4.
- Find the hidden constants $\theta_\alpha$, $s$, and $D$ as well as those in Lemmas 15, 17 to 22, 24, and 25 in Holtz et al. (2011)4.
- Find polynomials of the following kinds and find explicit bounds, with no hidden constants, on the approximation error for those polynomials:
- Polynomial operators that preserve polynomials at a higher degree than linear functions.
- Operators that produce a degree-$n$ polynomial from $O(n^2)$ sample points.
- Polynomials built from samples at rational values of a function $f$ that cluster at a quadratic rate toward the endpoints (Adcock et al. 2019)17 (for example, values that converge to Chebyshev points $\cos(j\pi/n)$ with increasing $n$, or to Legendre points). See also chapters 7, 8, and 12 of Trefethen, Approximation Theory and Approximation Practice, 2013.
Other Questions
- Let $f(\lambda):[0,1]\to [0,1]$ be writable as $f(\lambda)=\sum_{n\ge 0} a_n \lambda^n,$ where $a_n\ge 0$ is rational, $a_n$ is nonzero infinitely often, and $f(1)$ is irrational. Then what are simple criteria to determine whether there is $0\lt p\lt 1$ such that $0\le a_n\le p(1-p)^n$ and, if so, to find such $p$? Obviously, if $(a_n)$ is nowhere increasing then $1\gt p\ge a_0$.
- For each $r>0$, characterize the functions $f(\lambda)$ that admit a Bernoulli factory where the expected number of coin flips, raised to the power of $r$, is finite.
- Multiple-output Bernoulli factories: Let $f(\lambda):[a, b] \to (0, 1)$ be continuous, where $0\lt a\lt b\lt 1$. Define the entropy bound as $h(f(\lambda))/h(\lambda),$ where $h(x)=-x \ln(x)-(1-x) \ln(1-x)$ is related to the Shannon entropy function. Then there is an algorithm that tosses heads with probability $f(\lambda)$ given a coin that shows heads with probability $\lambda$ and no other source of randomness (Keane and O’Brien 1994)18.
But, is there an algorithm for $f$ that produces multiple independent outputs rather than one and has an expected number of coin flips per output that is arbitrarily close to the entropy bound, uniformly for every $\lambda$ in $f$’s domain? Call such an algorithm an optimal factory. (See Nacu and Peres (2005, Question 1)3.) And, does the answer change if the algorithm has access to a fair coin in addition to the biased coin?
So far, constants as well as $\lambda$ and $1-\lambda$ do admit an optimal factory (see same work), and, as Yuval Peres (Jun. 24, 2021) told me, there is an efficient multiple-output algorithm for $f(\lambda) = \lambda/2$. But are there others? See an appendix in one of my articles for more information on my progress on the problem.
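For reference, the entropy bound itself is straightforward to compute; a Python sketch for the illustrative choice $f(\lambda)=\lambda/2$:

```python
from math import log

def h(x):
    """h(x) = -x ln(x) - (1-x) ln(1-x), related to the Shannon entropy."""
    return -x * log(x) - (1 - x) * log(1 - x)

def entropy_bound(f, lam):
    """Lower bound on the expected flips per output when simulating f."""
    return h(f(lam)) / h(lam)

f = lambda lam: lam / 2
print(entropy_bound(f, 0.5))   # h(0.25)/h(0.5), about 0.811
```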
- Pushdown automata and algebraic functions: A pushdown automaton is a finite state machine with an unbounded stack, driven by a biased coin with an unknown probability of heads, $\lambda$. Its stack starts with a single symbol. On each step, the machine flips the coin; then, based on the flip’s outcome, the current state, and the top stack symbol, it moves to a new state (or keeps the current one) and replaces the top stack symbol with zero or more symbols. When the stack is empty, the machine stops and returns either 0 or 1, depending on the state it ends up in.
Let $f(\lambda)$ be continuous and map the open interval (0, 1) to itself. Mossel and Peres (2005)19 showed that a pushdown automaton can output 1 with probability $f(\lambda)$ only if $f$ is algebraic over the rational numbers (there is a nonzero polynomial $P(x, y)$ in two variables and whose coefficients are rational numbers, such that $P(x, f(x)) = 0$ for every $x$ in the domain of $f$). See an appendix in one of my articles for more information on my progress on the problem.
Prove or disprove:
- If $f$ is algebraic over rational numbers it can be simulated by a pushdown automaton.
- min($\lambda$, $1-\lambda$) and $\lambda^{1/p}$, for every prime $p\ge 3$, can be simulated by a pushdown automaton.
- Given that $f$ is algebraic over rational numbers, it can be simulated by a pushdown automaton if and only if its “critical exponent” is a dyadic number greater than $-1$ or has the form $-1-1/2^k$ for some integer $k\ge 1$. (See note 2 in “End Notes”.)
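As a toy illustration of the pushdown automaton model (a hypothetical machine, not one from the cited papers): a machine that pushes a symbol on heads and pops on tails performs a gambler’s-ruin walk, and the probability that its stack empties is $\min(1,(1-\lambda)/\lambda)$, an algebraic function of $\lambda$ (it satisfies $\lambda q^2-q+(1-\lambda)=0$). A Python sketch, with a step cap because the walk need not terminate when $\lambda>1/2$:

```python
import random

def pushdown_walk(lam, max_steps=1000):
    """Toy pushdown automaton: push a symbol on heads, pop on tails;
    return 1 if the stack empties.  The step cap makes this only
    approximate in the transient case lam > 1/2."""
    stack = 1
    for _ in range(max_steps):
        if random.random() < lam:
            stack += 1          # heads: push a symbol
        else:
            stack -= 1          # tails: pop the top symbol
        if stack == 0:
            return 1
    return 0

random.seed(3)
lam = 0.6
mean = sum(pushdown_walk(lam) for _ in range(20000)) / 20000
print(mean)  # close to (1 - 0.6)/0.6 = 2/3
```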
- Coin-flipping degree: Let $p(\lambda)$ be a polynomial that maps the closed unit interval to itself and satisfies $0\lt p(\lambda)\lt 1$ whenever $0\lt\lambda\lt 1$. Then its coin-flipping degree (Wästlund 1999)20 is the smallest $n$ such that $p$’s Bernstein coefficients of degree $n$ all lie in the closed unit interval. Given that a polynomial’s degree is $m$ and its “standard” coefficients are integers, what are upper bounds (or even exact maximums) on its coin-flipping degree?
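Degree elevation gives a brute-force way to compute the coin-flipping degree. For example, $p(\lambda)=3\lambda(1-\lambda)$, whose degree-2 Bernstein coefficients are $(0, 3/2, 0)$, acquires coefficients $(0, 1, 1, 0)$ after one elevation, so its coin-flipping degree is 3. A Python sketch using exact rational arithmetic:

```python
from fractions import Fraction

def elevate(coeffs):
    """Bernstein coefficients of the same polynomial, degree raised by 1."""
    n = len(coeffs) - 1
    return [coeffs[0]] + [
        Fraction(k, n + 1) * coeffs[k - 1]
        + (1 - Fraction(k, n + 1)) * coeffs[k]
        for k in range(1, n + 1)
    ] + [coeffs[n]]

def coin_flipping_degree(coeffs, limit=1000):
    """Smallest degree at which all Bernstein coefficients lie in [0, 1]."""
    while len(coeffs) - 1 <= limit:
        if all(0 <= c <= 1 for c in coeffs):
            return len(coeffs) - 1
        coeffs = elevate(coeffs)
    return None

# p(λ) = 3λ(1-λ): degree-2 Bernstein coefficients (0, 3/2, 0)
print(coin_flipping_degree([Fraction(0), Fraction(3, 2), Fraction(0)]))  # 3
```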
- Simple simulation algorithms: References are sought to papers and books that describe irrational constants or Bernoulli factory functions (continuous functions mapping (0,1) to itself) in any of the following ways. Ideally they should involve only rational numbers and should not compute p-adic digit expansions.
- Simulation experiments that succeed with an irrational probability.
- Simple continued fraction expansions of irrational constants.
- Functions written as infinite power series with rational coefficients (see “Certain Power Series”).
- Irrational numbers written as series expansions with rational coefficients (see “Certain Converging Series”).
- Functions whose integral is an irrational number.
- Closed shapes inside the unit square whose area is an irrational number. (Includes algorithms that tell whether a box lies inside, outside, or partly inside or outside the shape.) Example.
- Generate a uniform $(x, y)$ point inside a closed shape, then return 1 with probability $x$. For what shapes is the expected value of $x$ an irrational number? Example.
- Given an integer $m\ge 0$, a rational number $0<k<\exp(1)$ (and especially $k=1$ or $k=2$), and an unknown heads probability $0\le\lambda\le 1$, find a Bernoulli factory for—
\[f(\lambda)=\exp(-(\exp(m+\lambda)-(k(m+\lambda)))) = \frac{\exp(-\exp(m+\lambda))}{\exp(-(k(m+\lambda)))},\tag{PD}\]that, as much as possible, avoids calculating $h(\lambda) = \exp(m+\lambda)-k(m+\lambda)$, and especially avoids floating-point operations; in this sense, the more implicitly the Bernoulli factory works with irrational or transcendental functions, the better. The right-hand side of (PD) can be implemented by ExpMinus and division Bernoulli factories, but this is inefficient and heavyweight due to the need to calculate $\epsilon$ for the division factory. In addition, there is a Bernoulli factory that first calculates $h(\lambda)$ and $\lfloor h(\lambda)\rfloor$ using constructive reals and then runs ExpMinus, but this is likewise far from lightweight.
Prove or disprove:
- Given that $f:[0,1]\to (0,1]$ is convex, the polynomials $(g_n) = (B_n(f) - \max_{0\le\lambda\le 1}\text{abs}(B_n(f)(\lambda)-f(\lambda)))$ (where $n\ge 1$ is an integer power of 2) are in Bernstein form of degree $n$, converge to $f$ from below, and satisfy: $(g_{2n}-g_{n})$ is a polynomial with nonnegative Bernstein coefficients once it’s rewritten to a polynomial in Bernstein form of degree exactly $2n$. The same is true for the polynomials $(g_n) = (B_n(f) - \text{abs}(B_n(f)(1/2)-f(1/2)))$, if $f$ is also symmetric about 1/2.
- Let $f:(D\subseteq [0, 1])\to [0,1]$. Given a coin that shows heads with probability $\lambda$ (which can be 0 or 1), it is possible to toss heads with probability $f(\lambda)$ using the coin and no other sources of randomness (and, thus, $f$ is strongly simulable) if and only if—
- $f$ is constant on its domain, or is continuous and polynomially bounded on its domain (polynomially bounded means both $f$ and $1-f$ are bounded below by min($x^n$, $(1-x)^n$) for some integer $n$ (Keane and O’Brien 1994)18), and
- $f(0)$ is 0 or 1 if 0 is in $f$’s domain and $f(1)$ is 0 or 1 whenever 1 is in $f$’s domain, and
- if $f(0) = 0$ or $f(1) = 0$ or both, then there is a polynomial $g(x):[0,1]\to [0,1]$ with computable coefficients, such that $g(0) = f(0)$ and $g(1) = f(1)$ whenever 0 or 1, respectively, is in the domain of f, and such that $g(x)\gt f(x)$ for every $x$ in the domain of $f$, except at 0 and 1, and
- if $f(0) = 1$ or $f(1) = 1$ or both, then there is a polynomial $h(x):[0,1]\to [0,1]$ with computable coefficients, such that $h(0) = f(0)$ and $h(1) = f(1)$ whenever 0 or 1, respectively, is in the domain of $f$, and such that $h(x)\lt f(x)$ for every $x$ in the domain of f, except at 0 and 1.
A condition such as “0 is not in the domain of $f$, or $f$ can be extended to a Lipschitz-continuous function on $[0, \epsilon)$ for some $\epsilon>0$” does not work. A counterexample is $f(x)=(\sin(1/x)/4+1/2)\cdot(1-(1-x)^n)$ for $n\ge 1$ ($f(0)=0$), which is strongly simulable at 0 despite not being Lipschitz at 0. ($(1-x)^n$ is the probability of the biased coin showing zero $n$ times in a row.) Keane and O’Brien already showed strong simulability when $D$ contains neither 0 nor 1.
End Notes
Note 1: An example of $X$ is $\mathbb{P}(X=a) = p (1-p)^a$ where $0 < p < 1$ is a known rational. This question’s requirements imply that $\sum_{a\ge 0}\max_\lambda \text{abs}(\gamma_a(\lambda)) \le 1$. The proof of Keane and O’Brien (1994)18 produces a convex combination of polynomials with 0 and 1 as Bernstein coefficients, but the combination is difficult to construct (it requires finding maximums, for example) and so this proof does not appropriately answer this question.
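Here is a hypothetical worked example of the kind of construction sought (not taken from the cited proof): take $\mathbb{P}(X=a)=2^{-(a+1)}$ and $f(\lambda)=\lambda/(2-\lambda)=\sum_{a\ge 0}2^{-(a+1)}\lambda^{a+1}$, so that $\gamma_a(\lambda)/\mathbb{P}(X=a)=\lambda^{a+1}$, a polynomial whose Bernstein coefficients lie in $[0,1]$. The resulting factory samples $X$, then flips the coin $X+1$ times and outputs 1 only if all flips show heads; a Python sketch:

```python
import random

def factory_series(flip):
    """Simulate f(λ) = λ/(2-λ) via the series Σ 2^{-(a+1)} λ^{a+1}:
    sample X with P(X=a) = 2^{-(a+1)} (geometric), then return 1
    only if X+1 coin flips all show heads (probability λ^{X+1})."""
    x = 0
    while random.random() >= 0.5:   # geometric(1/2) sample
        x += 1
    return 1 if all(flip() for _ in range(x + 1)) else 0

random.seed(4)
lam = 0.5
flip = lambda: random.random() < lam
mean = sum(factory_series(flip) for _ in range(200000)) / 200000
print(mean)  # close to 0.5/(2 - 0.5) = 1/3
```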
Note 2: On pushdown automata: Etessami and Yannakakis (2009)21 showed that pushdown automata with rational probabilities are equivalent to recursive Markov chains (with rational transition probabilities), and that for every recursive Markov chain, the system of polynomial equations has nonnegative coefficients. But this paper doesn’t deal with the case of recursive Markov chains where the transition probabilities are not just rational but can also be $\lambda$ and $1-\lambda$, where $\lambda$ is an unknown rational or irrational probability of heads. Also, Banderier and Drmota (2015)22 showed the asymptotic behavior of power series solutions $f(\lambda)$ of a polynomial system, where both the series and the system have nonnegative real coefficients. Notably, functions of the form $\lambda^{1/p}$, where $p\ge 3$ is not a power of 2, are not possible solutions, because their so-called “critical exponent” is not dyadic. But the result seems not to apply to piecewise power series such as $\min(\lambda,1-\lambda)$, which are likewise algebraic functions.
Note 3: The condition on nonnegative Bernstein coefficients ensures that not only the polynomials “increase” to $f(\lambda)$, but also their Bernstein coefficients. This condition is equivalent in practice to the following statement (Nacu & Peres 2005)3. For every integer $n\ge 1$ that’s a power of 2, $a(2n, k)\ge\mathbb{E}[a(n, X_{n,k})]= \left(\sum_{i=0}^k a(n,i) {n\choose i}{n\choose {k-i}}/{2n\choose k}\right)$, where $a(n,k)$ is the degree-$n$ polynomial’s $k$-th Bernstein coefficient, where $0\le k\le 2n$ is an integer, and where $X_{n,k}$ is a hypergeometric($2n$, $k$, $n$) random variable. A hypergeometric($2n$, $k$, $n$) random variable is the number of “good” balls out of $k$ balls taken uniformly at random, all at once, from a bag containing $2n$ balls, $n$ of which are “good”. See also my MathOverflow question on finding bounds for hypergeometric variables.
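The inequality in Note 3 can be verified numerically for particular sequences. For instance, for the concave function $f(\lambda)=2\lambda(1-\lambda)$ with coefficients $a(n,k)=f(k/n)$, Jensen’s inequality guarantees $a(2n,k)\ge\mathbb{E}[a(n,X_{n,k})]$; the following Python check confirms it in exact rational arithmetic for small $n$:

```python
from fractions import Fraction
from math import comb

def sigma(n, k, i):
    """P(X = i) for X a hypergeometric(2n, k, n) random variable."""
    return Fraction(comb(n, i) * comb(n, k - i), comb(2 * n, k))

f = lambda x: 2 * x * (1 - x)    # concave, so a(2n,k) >= E[a(n,X)] by Jensen

for n in (1, 2, 4, 8):
    for k in range(2 * n + 1):
        lhs = f(Fraction(k, 2 * n))                      # a(2n, k)
        rhs = sum(sigma(n, k, i) * f(Fraction(i, n))     # E[a(n, X)]
                  for i in range(max(0, k - n), min(k, n) + 1))
        assert lhs >= rhs
print("condition holds for n = 1, 2, 4, 8")
```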
Note 4: If $W_n(0)=f(0)$ and $W_n(1)=f(1)$ for every $n$, then the inequality $(PB)$ is automatically true when $k=0$ and $k=2n$, so that the statement has to be checked only for $0\lt k\lt 2n$. If, in addition, $W_n$ is symmetric about 1/2, so that $W_n(\lambda)=W_n(1-\lambda)$ whenever $0\le \lambda\le 1$, then the statement has to be checked only for $0\lt k\le n$ (since the values $\sigma_{n,k,i} = {n\choose i}{n\choose {k-i}}/{2n \choose k}$ are symmetric in that they satisfy $\sigma_{n,k,i}=\sigma_{n,k,k-i}$).
Special cases for this question are if $W_n = 2 f - B_n(f)$ and $r$ is 3 or 4, or $W_n = B_n(B_n(f))+3(f-B_n(f))$ and $r$ is 5 or 6; these cases correspond to the iterated Boolean sum of Bernstein polynomials, $B_n(W_n)=f-(f-B_n(f))^k$, which doesn’t reproduce polynomials of degree higher than linear, making it hard to find a bound better than $O(1/n)$ that satisfies the conjecture when $r\ge 3$.
Notes
1. See also the following questions on Mathematics Stack Exchange and MathOverflow: Converging polynomials, Error bounds, A conjecture, Hypergeometric random variable, Lorentz operators, Series representations.
2. Łatuszyński, K., Kosmidis, I., Papaspiliopoulos, O., Roberts, G.O., “Simulating events of unknown probabilities via reverse time martingales”, arXiv:0907.4018v2 [stat.CO], 2009/2011.
3. Nacu, Şerban, and Yuval Peres. “Fast simulation of new coins from old”, The Annals of Applied Probability 15, no. 1A (2005): 93-115.
4. Holtz, O., Nazarov, F., Peres, Y., “New Coins from Old, Smoothly”, Constructive Approximation 33 (2011).
5. E. Voronovskaya, “Détermination de la forme asymptotique d’approximation des fonctions par les polynômes de M. Bernstein”, 1932.
6. G.G. Lorentz, “The degree of approximation by polynomials with positive coefficients”, 1966.
7. Sevy, J., “Acceleration of convergence of sequences of simultaneous approximants”, dissertation, Drexel University, 1991.
8. Waldron, S., “Increasing the polynomial reproduction of a quasi-interpolation operator”, Journal of Approximation Theory 161 (2009).
9. Costabile, F., Gualtieri, M.I., Serra, S., “Asymptotic expansion and extrapolation for Bernstein polynomials with applications”, BIT 36 (1996).
10. Han, Xuli. “Multi-node higher order expansions of a function.” Journal of Approximation Theory 124.2 (2003): 242-253. https://doi.org/10.1016/j.jat.2003.08.001
11. Khosravian-Arab, Hassan, Mehdi Dehghan, and M. R. Eslahchi. “A new approach to improve the order of approximation of the Bernstein operators: theory and applications.” Numerical Algorithms 77 (2018): 111-150.
12. Micchelli, Charles. “The saturation class and iterates of the Bernstein polynomials”, Journal of Approximation Theory 8, no. 1 (1973): 1-18.
13. Güntürk, C. Sinan, and Weilin Li. “Approximation with one-bit polynomials in Bernstein form”, arXiv:2112.09183 (2021); Constructive Approximation, pp. 1-30 (2022).
14. Güntürk, C. Sinan, and Weilin Li. “Approximation of functions with one-bit neural networks”, arXiv:2112.09181 (2021).
15. Draganov, B.R., “Simultaneous approximation by the Bernstein operator”, dissertation, Sofia University “St. Kliment Ohridski”, 2024.
16. Tachev, Gancho. “Linear combinations of two Bernstein polynomials”, Mathematical Foundations of Computing, 2022.
17. Adcock, B., Platte, R.B., Shadrin, A., “Optimal sampling rates for approximating analytic functions from pointwise samples”, IMA Journal of Numerical Analysis 39(3), July 2019.
18. Keane, M. S., and O’Brien, G. L., “A Bernoulli factory”, ACM Transactions on Modeling and Computer Simulation 4(2), 1994.
19. Mossel, Elchanan, and Yuval Peres. “New coins from old: computing with unknown bias”, Combinatorica 25(6), pp. 707-724, 2005.
20. Wästlund, J., “Functions arising by coin flipping”, 1999.
21. Etessami, K. and Yannakakis, M., “Recursive Markov chains, stochastic grammars, and monotone systems of nonlinear equations”, Journal of the ACM 56(1), pp. 1-66, 2009.
22. Banderier, C. and Drmota, M., “Formulae and asymptotics for coefficients of algebraic functions”, Combinatorics, Probability and Computing 24(1), pp. 1-53, 2015.