Randomization and Sampling Methods

Peter Occil

Abstract: This page discusses many ways applications can sample randomized content by transforming the numbers produced by an underlying source of random numbers, such as numbers produced by a pseudorandom number generator, and offers pseudocode and Python sample code for many of these methods.

2020 Mathematics Subject Classification: 68W20.

Introduction

This page catalogs randomization methods and sampling methods. A randomization or sampling method is driven by a "source of random numbers" and produces numbers or other values called random variates. These variates are the result of the randomization. (The "source of random numbers" is often simulated in practice by so-called pseudorandom number generators, or PRNGs.) This document covers many methods, including—

This page is focused on randomization and sampling methods that exactly sample from the distribution described, without introducing additional errors beyond those already present in the inputs (and assuming that an ideal "source of random numbers" is available). This will be the case if there is a finite number of values to choose from. But for the normal distribution and other distributions that take on an infinite number of values, there will always be some level of approximation involved; in this case, the focus of this page is on methods that minimize the error they introduce.

This document shows pseudocode for many of the methods, and sample Python code that implements many of the methods in this document is available, together with documentation for the code.

The randomization methods presented on this page assume we have an endless source of numbers chosen independently at random and with a uniform distribution. For more information, see "Sources of Random Numbers" in the appendix.

In general, the following are outside the scope of this document:

About This Document

This is an open-source document; for an updated version, see the source code or its rendering on GitHub. You can send comments on this document either on CodeProject or on the GitHub issues page.

Comments on any aspect of this document are welcome.

Contents

Notation

In this document:

Uniform Random Integers

This section shows how to derive independent uniform random integers with the help of a "source of random numbers" (or a device or program that simulates that source).

This section describes four methods: RNDINT, RNDINTEXC, RNDINTRANGE, RNDINTEXCRANGE. Of these, RNDINT, described next, can serve as the basis for the remaining methods.

RNDINT: Random Integers in [0, N]

RNDINT(maxInclusive) is the core method in this document; it derives independent uniform random integers in the interval [0, maxInclusive] from a "source of random numbers"(3). In the pseudocode below, which implements RNDINT:

Specifically:

If the underlying source produces: | Then NEXTRAND() is: | And MODULUS is:
Non-uniform numbers(4). | The next bit from a new source formed by taking the underlying source's outputs as input to a randomness extraction technique to produce independent unbiased random bits (zeros or ones). | 2.
Uniform numbers not described below. | Same as above. | 2.
Uniform 32-bit non-negative integers. | The next number from the source. | 2^32.
Uniform 64-bit non-negative integers. | The next number from the source. | 2^64.
Uniform integers in the interval [0, n). | The next number from the source. | n.
Uniform numbers in the interval [0, 1) known to be evenly spaced by a number p (e.g., dSFMT). | The next number from the source, divided by p. | 1/p.
Uniform numbers in the interval [0, 1), where numbers in [0, 0.5) and those in [0.5, 1) are known to occur with equal probability (e.g., Java's Math.random()). | 0 if the source outputs a number less than 0.5, or 1 otherwise. | 2.
METHOD RndIntHelperNonPowerOfTwo(maxInclusive)
  if maxInclusive <= MODULUS - 1:
    // NOTE: If the programming language implements
    // division with two integers by discarding the result's
    // fractional part, the division can be used as is without
    // using a "floor" function.
    nPlusOne = maxInclusive + 1
    maxexc = floor((MODULUS - 1) / nPlusOne) * nPlusOne
    while true // until a value is returned
      ret = NEXTRAND()
      if ret < nPlusOne: return ret
      if ret < maxexc: return rem(ret, nPlusOne)
    end
  else
    cx = floor(maxInclusive / MODULUS) + 1
    while true // until a value is returned
       ret = cx * NEXTRAND()
       // NOTE: The addition operation below should
       // check for integer overflow and should reject the
       // number if overflow would result.
       ret = ret + RNDINT(cx - 1)
       if ret <= maxInclusive: return ret
    end
  end
END METHOD

METHOD RndIntHelperPowerOfTwo(maxInclusive)
  // NOTE: Finds the number of bits minus 1 needed
  // to represent MODULUS (in other words, the number
  // of random bits returned by NEXTRAND() ). This will
  // be a constant here, though.
  modBits = ln(MODULUS)/ln(2)
  // Lumbroso's Fast Dice Roller.
  x = 1
  y = 0
  nextBit = modBits
  rngv = 0
  while true // until a value is returned
    if nextBit >= modBits
      nextBit = 0
      rngv = NEXTRAND()
    end
    x = x * 2
    y = y * 2 + rem(rngv, 2)
    rngv = floor(rngv / 2)
    nextBit = nextBit + 1
    if x > maxInclusive
      if y <= maxInclusive: return y
      x = x - maxInclusive - 1
      y = y - maxInclusive - 1
    end
  end
END METHOD

METHOD RNDINT(maxInclusive)
  // maxInclusive must be 0 or greater
  if maxInclusive < 0: return error
  if maxInclusive == 0: return 0
  if maxInclusive == MODULUS - 1: return NEXTRAND()
  // NOTE: Finds the number of bits minus 1 needed
  // to represent MODULUS (if it's a power of 2).
  // This will be a constant here, though.
  modBits=ln(MODULUS)/ln(2)
  if floor(modBits) == modBits // Is an integer
    return RndIntHelperPowerOfTwo(maxInclusive)
  else
    return RndIntHelperNonPowerOfTwo(maxInclusive)
  end
END METHOD

There are many algorithms for RNDINT(maxInclusive), as shown in the table below, but in general, the algorithm can be unbiased only if it runs forever in the worst case. The algorithms listed take n as a parameter, where n = maxInclusive + 1, and thus sample from the interval [0, n). (The column "Unbiased?" means whether the algorithm generates random integers without bias, even if n is not a power of 2.)

Algorithm | Optimal? | Unbiased? | Time Complexity
Rejection sampling: Sample in a bigger range until a sampled number fits the smaller range. | Not always | Yes | Runs forever in worst case
Multiply-and-shift reduction: Generate bignumber, an integer made of k unbiased bits, where 2^k is much greater than n, then find (bignumber * n) >> k (see (Lemire 2016)(5), (Lemire 2018)(6), and the "Integer Multiplication" algorithm surveyed by M. O'Neill). | No | No | Constant
Modulo reduction: Generate bignumber as above, then find rem(bignumber, n). | No | No | Constant
Fast Dice Roller (Lumbroso 2013)(7) (see pseudocode above). | Yes | Yes | Runs forever in worst case
Math Forum (2004)(8) or (Mennucci 2018)(9) (batching/recycling random bits). | Yes | Yes | Runs forever in worst case
"FP Multiply" surveyed by M. O'Neill. | No | No | Constant
Algorithm in "Conclusion" section by O'Neill. | No | Yes | Runs forever in worst case
"Debiased" and "Bitmask with Rejection" surveyed by M. O'Neill. | No | Yes | Runs forever in worst case

Notes:

  1. RNDINT as a binary tree walker. Donald E. Knuth and Andrew C. Yao (1976)(10) showed that any algorithm that generates random integers using random unbiased bits (each bit is 0 or 1 with equal probability) can be described as a binary tree (also known as a DDG tree or discrete distribution generating tree). (This also applies to RNDINT algorithms.) Random unbiased bits trace a path in this tree, and each leaf (terminal node) in the tree represents an outcome. In the case of RNDINT(maxInclusive), there are n = maxInclusive + 1 outcomes that each occur with probability 1/n.
    Knuth and Yao showed that any optimal DDG tree algorithm needs at least log2(n) and at most log2(n) + 2 bits on average (where log2(x) = ln(x)/ln(2)).(11) But as they also showed, for the algorithm to be unbiased (exact), it must run forever in the worst case, even if it uses few random bits on average (that is, there is no way in general to "fix" this worst case while remaining unbiased). This is because 1/n has a nonterminating base-2 digit expansion except when n is a power of 2, so that the resulting DDG tree has to either be infinitely deep or include "rejection leaves" at the end of the tree.
    For instance, the modulo reduction method can be represented by a DDG tree in which rejection leaves are replaced with labeled outcomes, but the method is biased because only some outcomes can replace rejection leaves this way. For the same reason, stopping the rejection sampler after a fixed number of tries likewise leads to bias. However, which outcomes are biased this way depends on the algorithm.
  2. Reducing "bit waste". Any algorithm that chooses integers at random, including RNDINT, needs at least log2(n) bits per chosen integer on average, as noted above, but most of them use many more. There are various ways to bring an algorithm closer to log2(n). They include batching, bit recycling, and randomness extraction, and they are discussed, for example, in the Math Forum page and the Lumbroso and Mennucci papers referenced above, and in Devroye and Gravel (2020, section 2.3)(12). Batching example: To generate three digits from 0 through 9, we can call RNDINT(999) to generate an integer in [0, 999], then break the number it returns into three digits.
  3. Simulating dice. If we have a (virtual) fair p-sided die, how can we use it to simulate rolls of a k-sided die? This can't be done without "wasting" randomness, unless "every prime number dividing k also divides p" (see "Simulating a dice with a dice" by B. Kloeckner, 2008). However, randomness extraction (see my Note on Randomness Extraction) can turn die rolls into unbiased bits, so that the discussion earlier in this section applies.
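
For illustration, the following is a minimal Python sketch of Lumbroso's Fast Dice Roller (the pseudocode above), with random.getrandbits(1) standing in for the "source of random numbers"; the function name is illustrative, and Python's random module is a PRNG, not an ideal source.

import random

def fast_dice_roller(max_inclusive):
    # Returns a uniform random integer in [0, max_inclusive].
    # Consumes one unbiased bit per iteration and may loop
    # forever in the worst case, as any unbiased algorithm must.
    if max_inclusive < 0:
        raise ValueError
    if max_inclusive == 0:
        return 0
    x, y = 1, 0
    while True:
        x = x * 2
        y = y * 2 + random.getrandbits(1)
        if x > max_inclusive:
            if y <= max_inclusive:
                return y
            x = x - max_inclusive - 1
            y = y - max_inclusive - 1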

RNDINTRANGE: Random Integers in [N, M]

The naïve way of generating a random integer in the interval [minInclusive, maxInclusive], shown below, works well for nonnegative integers and arbitrary-precision integers.

 METHOD RNDINTRANGE(minInclusive, maxInclusive)
   // minInclusive must not be greater than maxInclusive
   if minInclusive > maxInclusive: return error
   return minInclusive + RNDINT(maxInclusive - minInclusive)
 END METHOD

The naïve approach won't work as well, though, if the integer format can express negative and nonnegative integers and the difference between maxInclusive and minInclusive exceeds the highest possible integer for the format. For integer formats that can express—

  1. every integer in the interval [-1 - MAXINT, MAXINT] (e.g., Java int, short, or long), or
  2. every integer in the interval [-MAXINT, MAXINT] (e.g., Java float and double and .NET's implementation of System.Decimal),

where MAXINT is an integer greater than 0, the following pseudocode for RNDINTRANGE can be used.

METHOD RNDINTRANGE(minInclusive, maxInclusive)
   // minInclusive must not be greater than maxInclusive
   if minInclusive > maxInclusive: return error
   if minInclusive == maxInclusive: return minInclusive
   if minInclusive==0: return RNDINT(maxInclusive)
   // Difference does not exceed maxInclusive
   if minInclusive > 0 or minInclusive + MAXINT >= maxInclusive
       return minInclusive + RNDINT(maxInclusive - minInclusive)
   end
   while true // until a value is returned
     ret = RNDINT(MAXINT)
     // NOTE: For case 1, use the following line:
     if RNDINT(1) == 0: ret = -1 - ret
     // NOTE: For case 2, use the following three lines
     // instead of the preceding line; these lines
     // avoid negative zero
     // negative = RNDINT(1) == 0
     // if negative: ret = 0 - ret
     // if negative and ret == 0: continue
     if ret >= minInclusive and ret <= maxInclusive: return ret
   end
END METHOD

RNDINTEXC: Random Integers in [0, N)

RNDINTEXC(maxExclusive), which generates a random integer in the interval [0, maxExclusive), can be implemented as follows(13):

 METHOD RNDINTEXC(maxExclusive)
    if maxExclusive <= 0: return error
    return RNDINT(maxExclusive - 1)
 END METHOD

Note: RNDINTEXC is not given as the core random generation method because it's harder to use it to fill integers in popular integer formats with random bits than it is with RNDINT.

RNDINTEXCRANGE: Random Integers in [N, M)

RNDINTEXCRANGE returns a random integer in the interval [minInclusive, maxExclusive). It can be implemented using RNDINTRANGE, as the following pseudocode demonstrates.

METHOD RNDINTEXCRANGE(minInclusive, maxExclusive)
   if minInclusive >= maxExclusive: return error
   if minInclusive >=0: return RNDINTRANGE(
      minInclusive, maxExclusive - 1)
   while true // until a value is returned
     ret = RNDINTRANGE(minInclusive, maxExclusive)
     if ret < maxExclusive: return ret
   end
END METHOD
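
The following Python sketch shows one way the four methods can fit together; secrets.randbelow stands in for an exact RNDINT, and the function names are illustrative, chosen to mirror the pseudocode. Because Python integers have arbitrary precision, the naïve form of RNDINTRANGE is exact here.

import secrets

def rndint(max_inclusive):
    # Uniform integer in [0, max_inclusive].
    return secrets.randbelow(max_inclusive + 1)

def rndintexc(max_exclusive):
    # Uniform integer in [0, max_exclusive).
    return rndint(max_exclusive - 1)

def rndintrange(min_inclusive, max_inclusive):
    # Uniform integer in [min_inclusive, max_inclusive]; the
    # naive approach is exact with arbitrary-precision integers.
    return min_inclusive + rndint(max_inclusive - min_inclusive)

def rndintexcrange(min_inclusive, max_exclusive):
    # Uniform integer in [min_inclusive, max_exclusive).
    return min_inclusive + rndintexc(max_exclusive - min_inclusive)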

Uniform Random Bits

The idiom RNDINT((1 << b) - 1) is a naïve way of generating a uniform random b-bit integer (with maximum 2^b - 1).

In practice, memory is usually divided into bytes, or 8-bit integers in the interval [0, 255]. In this case, a block of memory can be filled with random bits—
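
In Python, for example, both idioms have direct standard-library counterparts; a sketch (the secrets module is used here, but random.getrandbits and os.urandom behave similarly):

import secrets

b = 16
bits = secrets.randbits(b)       # uniform b-bit integer in [0, 2^b - 1]
block = secrets.token_bytes(32)  # 32 bytes (256 bits) of random memory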

Examples of Using the RNDINT Family

  1. To choose either −1 or 1 with equal probability (the Rademacher distribution), one of the following idioms can be used: (RNDINT(1) * 2 - 1) or (RNDINTEXC(2) * 2 - 1).
  2. To generate a random integer that's divisible by a positive integer (DIV), generate the integer with any method (such as RNDINT), let X be that integer, then generate X - rem(X, DIV) if X >= 0, or X - (DIV - rem(abs(X), DIV)) otherwise. (Depending on the method, the resulting integer may be out of range, in which case this procedure is to be repeated.)
  3. A random 2-dimensional point on an N×M grid can be expressed as a single integer as follows (see also the sketch after this list):
    • To generate a random N×M point P, generate P = RNDINT(N * M - 1) (P is thus in the interval [0, N * M)).
    • To convert a point P to its 2D coordinates, generate [rem(P, N), floor(P / N)]. (Each coordinate starts at 0.)
    • To convert 2D coordinates coord to an N×M point, generate P = coord[1] * N + coord[0].
  4. To simulate rolling an N-sided die (N greater than 1): RNDINTRANGE(1, N), which chooses a number in the interval [1, N] with equal probability.
  5. To generate a random integer with one base-10 digit: RNDINTRANGE(0, 9).
  6. To generate a random integer with N base-10 digits (where N is 2 or greater), where the first digit can't be 0: RNDINTRANGE(pow(10, N-1), pow(10, N) - 1).
  7. To choose a number in the interval [mn, mx), with equal probability, in increments equal to step: mn+step*RNDINTEXC(ceil((mx-mn)/(1.0*step))).
  8. To choose an integer in the interval [0, X) at random:
    • And favor numbers in the middle: floor((RNDINTEXC(X) + RNDINTEXC(X)) / 2).
    • And favor high numbers: max(RNDINTEXC(X), RNDINTEXC(X)).
    • And favor low numbers: min(RNDINTEXC(X), RNDINTEXC(X)).
    • And strongly favor high numbers: max(RNDINTEXC(X), RNDINTEXC(X), RNDINTEXC(X)).
    • And strongly favor low numbers: min(RNDINTEXC(X), RNDINTEXC(X), RNDINTEXC(X)).
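
The following Python sketch demonstrates the grid-point conversions from item 3 above; random.randrange stands in for RNDINTEXC, and the grid dimensions are example values.

import random

N, M = 10, 8  # grid dimensions (example values)

# Choose a random N x M point as a single integer.
p = random.randrange(N * M)   # RNDINT(N * M - 1)
# Convert the point to 2D coordinates (each starting at 0).
coord = [p % N, p // N]       # [rem(P, N), floor(P / N)]
# Convert the coordinates back to a single integer.
p2 = coord[1] * N + coord[0]
assert p == p2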

Randomization Techniques

This section describes commonly used randomization techniques, such as shuffling, selection of several unique items, and creating random strings of text.

Boolean (True/False) Conditions

To generate a condition that is true at the specified probabilities, use the following idioms in an if condition:

The following helper method generates 1 with probability x/y and 0 otherwise:

METHOD ZeroOrOne(x,y)
  if RNDINTEXC(y)<x: return 1
  return 0
END METHOD

The method can also be implemented in the following way (as pointed out by Lumbroso (2013, Appendix B)(7)):

// NOTE: Modified from Lumbroso
// Appendix B to add 'z==0' and error checks
METHOD ZeroOrOne(x,y)
  if y <= 0: return error
  if x==y: return 1
  z = x
  while true // until a value is returned
    z = z * 2
    if z >= y
      if RNDINT(1) == 0: return 1
      z = z - y
    else if z == 0 or RNDINT(1) == 0: return 0
  end
END METHOD

Note: Probabilities can be rational or irrational numbers. Rational probabilities are of the form n/d and can be simulated with ZeroOrOne above. Irrational probabilities (such as exp(-x/y) or ln(2)) have an infinite digit expansion (0.ddddd....), and they require special algorithms to simulate; see "Algorithms for Irrational Constants" in my page on Bernoulli Factory algorithms.
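
A Python rendering of the second ZeroOrOne method might look like the following; random.getrandbits(1) plays the role of RNDINT(1), and the function name is illustrative.

import random

def zero_or_one(x, y):
    # Returns 1 with probability x/y (0 <= x <= y), 0 otherwise.
    if y <= 0:
        raise ValueError
    if x == y:
        return 1
    z = x
    while True:
        z = z * 2
        if z >= y:
            if random.getrandbits(1) == 0:
                return 1
            z = z - y
        elif z == 0 or random.getrandbits(1) == 0:
            return 0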

Examples:

Random Sampling

This section contains ways to choose one or more items from among a collection of them, where each item in the collection has the same chance to be chosen as any other. This is called random sampling and can be done with replacement or without replacement.

Sampling With Replacement: Choosing a Random Item from a List

Sampling with replacement essentially means taking a random item and putting it back. To choose a random item from a list—

Sampling Without Replacement: Choosing Several Unique Items

Sampling without replacement essentially means taking a random item without putting it back. There are several approaches for doing a uniform random choice of k unique items or values from among n available items or values, depending on such things as whether n is known and how big n and k are.

  1. If n is not known in advance: Use the reservoir sampling method; see the RandomKItemsFromFile method, in pseudocode given later.
  2. If n is relatively small (for example, if there are 200 available items, or there is a range of numbers from 0 through 200 to choose from):
    • If items have to be chosen from a list in relative (index) order, or if n is 1, then use RandomKItemsInOrder (given later).
    • Otherwise, if the order of the sampled items is unimportant, and each item can be derived from its index (the item's position as an integer starting at 0) without looking it up in a list: Use the RandomKItemsFromFile method.(14)
    • Otherwise, if k is much smaller than n, proceed as in item 3 instead.
    • Otherwise, any of the following will choose k items in random order:
      • Store all the items in a list, shuffle that list, then choose the first k items from that list.
      • If the items are already stored in a list and the list's order can be changed, then shuffle that list and choose the first k items from the shuffled list.
      • If the items are already stored in a list and the list's order can't be changed, then store the indices to those items in another list, shuffle the latter list, then choose the first k indices (or the items corresponding to those indices) from the latter list.
  3. If k is much smaller than n and the order of the items must be random or is unimportant:
    1. If the items are stored in a list whose order can be changed: Do a partial shuffle of that list, then choose the last k items from that list. A partial shuffle proceeds as given in the section "Shuffling", except the partial shuffle stops after k swaps have been made (where swapping one item with itself counts as a swap).
    2. Otherwise: Create another empty list newlist, and create a key/value data structure such as a hash table. Then, for each integer i in the interval [0, k−1], do j = RNDINTEXC(n-i); AddItem(newlist, HGET(j,j)); HSET(j,HGET(n-i-1,n-i-1)), where HSET(k,v) sets the item with key k in the hash table to v, and HGET(k,v) gets the item with key k in that table, or returns v if there is no such item (Ting 2021)(15). The new list stores the indices to the chosen items, in random order.
  4. If n - k is much smaller than n, the items are stored in a list, and the order of the sampled items is unimportant:
    1. If the list's order can be changed: Do a partial shuffle of that list, except that n-k rather than k swaps are done, then choose the first k items from that list. (Note 5 in "Shuffling" can't be used.)
    2. Otherwise, if n is not very large (for example, less than 5000): Store the indices to those items in another list, do a partial shuffle of the latter list, except that n-k rather than k swaps are done, then choose the first k indices (or the items corresponding to those indices) from the latter list. (Note 5 in "Shuffling" can't be used.)
    3. Otherwise: Proceed as in item 5 instead.
  5. Otherwise (for example, if 32-bit or larger integers will be chosen so that n is 2^32, or if n is otherwise very large):

    • If the items have to be chosen in relative (index) order: Let n2 = floor(n/2). Generate h = Hypergeometric(k, n2, n). Sample h integers in relative order from the list [0, 1, ..., n2 - 1] by doing a recursive run of this algorithm (items 1 to 5), then sample k - h integers in relative order from the list [n2, n2 + 1, ..., n - 1] by running this algorithm recursively. The integers chosen this way are the indices to the desired items in relative (index) order (Sanders et al. 2019)(16).
    • Otherwise, create a data structure to store the indices to items already chosen. When a new index to an item is randomly chosen, add it to the data structure if it's not already there, or if it is, choose a new random index. Repeat this process until k indices were added to the data structure this way (a sketch of this approach appears at the end of this section). Examples of suitable data structures are—

      • a hash table,
      • a compressed bit set (e.g., "roaring bitmap", EWAH), and
      • a self-sorting data structure such as a red–black tree, if the random items are to be retrieved in sorted order or in index order.

      Many applications require generating "unique random" values to identify database records or other shared resources, among other reasons. For ways to generate such values, see my recommendation document.
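
Below is a minimal Python sketch of the hash-table approach from item 5, using a hash-based set as the data structure; random.randrange stands in for RNDINTEXC, and the function name is illustrative.

import random

def sample_unique_indices(n, k):
    # Chooses k distinct indices in [0, n) by rejecting repeats;
    # suitable when k is much smaller than n.
    if k > n:
        raise ValueError
    chosen = set()
    while len(chosen) < k:
        chosen.add(random.randrange(n))
    return list(chosen)  # note: not in random order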

Shuffling

The Fisher–Yates shuffle method shuffles a list (puts its items in a random order) such that all permutations (arrangements) of that list are equally likely to occur. However, that method is also easy to write incorrectly — see also (Atwood 2007)(17). The following pseudocode is designed to shuffle a list's contents.

METHOD Shuffle(list)
   // NOTE: Check size of the list early to prevent
   // `i` from being less than 0 if the list's size is 0 and
   // `i` is implemented using a nonnegative integer
   // type available in certain programming languages.
   if size(list) >= 2
      // Set i to the last item's index
      i = size(list) - 1
      while i > 0
         // Choose an item ranging from the first item
         // up to the item given in `i`. Note that the item
         // at i+1 is excluded.
         j = RNDINTEXC(i + 1)
         // Swap item at index i with item at index j;
         // in this case, i and j may be the same
         tmp = list[i]
          list[i] = list[j]
          list[j] = tmp
         // Move i so it points to the previous item
         i = i - 1
      end
   end
   // NOTE: An implementation can return the
   // shuffled list, as is done here, but this is not required.
   return list
END METHOD
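
The same shuffle in Python, shown for illustration (in practice, Python's built-in random.shuffle implements this algorithm):

import random

def shuffle(lst):
    # Fisher-Yates shuffle; every permutation is equally likely,
    # assuming the underlying source is uniform.
    i = len(lst) - 1
    while i > 0:
        j = random.randrange(i + 1)  # 0 <= j <= i
        lst[i], lst[j] = lst[j], lst[i]
        i -= 1
    return lst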

Notes:

  1. j = RNDINTEXC(i + 1) can't be replaced with j = RNDINTEXC(size(list)), since that would introduce bias. If that line is replaced with j = RNDINTEXC(i), the result is Sattolo's algorithm (which chooses from among permutations with cycles) rather than a Fisher–Yates shuffle.
  2. When it comes to shuffling, the choice of pseudorandom number generator (or whatever is simulating a "source of random numbers") is important; see my recommendation document on shuffling.
  3. A shuffling algorithm that can be carried out in parallel is described in (Bacher et al., 2015)(18).
  4. A derangement is a permutation where every item moves to a different position. A random derangement can be generated as follows (Merlini et al. 2007)(19): (1) modify Shuffle by adding the following line after j = RNDINTEXC(i + 1): if i == list[j]: return nothing, and changing while i > 0 to while i >= 0; (2) use the following pseudocode with the modified Shuffle method: while True; list = []; for i in 0...n: AddItem(list, i); s=Shuffle(list); if s!=nothing: return s; end.
  5. Ting (2021)(15) showed how to reduce the space complexity of shuffling via a hash table: Modify Shuffle as follows: (1) Create a hash table at the start of the method; (2) instead of the swap tmp = list[i]; list[i] = list[j]; list[j] = tmp, use the following: list[i] = HGET(j,j); HSET(j,HGET(i,i)); if j==i: HDEL(i), where HSET(k,v) sets the item with key k in the hash table to v; HGET(k,v) gets the item with key k in that table, or returns v if there is no such item; and HDEL(k) deletes the item with key k from that table. (The hash table can instead be any key/value map structure, including a red–black tree. This can be combined with note 4 to generate derangements, except replace list[j] in note 4 with HGET(j,j).)

Random Character Strings

To generate a random string of characters:

  1. Prepare a list of the letters, digits, and/or other characters the string can have. Examples are given later in this section.
  2. Build a new string whose characters are chosen at random from that character list. The method, shown in the pseudocode below, demonstrates this. The method samples characters at random with replacement, and returns a list of the sampled characters. (How to convert this list to a text string depends on the programming language and is outside the scope of this page.) The method takes two parameters: characterList is the list from step 1, and stringSize is the number of random characters.

 

METHOD RandomString(characterList, stringSize)
  i = 0
  newString = NewList()
  while i < stringSize
    // Choose a character from the list
    randomChar = characterList[RNDINTEXC(size(characterList))]
    // Add the character to the string
    AddItem(newString, randomChar)
    i = i + 1
  end
  return newString
END METHOD

The following are examples of character lists:

  1. For an alphanumeric string, or string of letters and digits, the characters can be the basic digits "0" to "9" (U+0030-U+0039, nos. 48-57), the basic upper case letters "A" to "Z" (U+0041-U+005A, nos. 65-90), and the basic lower case letters "a" to "z" (U+0061-U+007A, nos. 97-122), as given in the Unicode Standard. (A sketch of this case appears after this list.)
  2. For a base-10 digit string, the characters can be the basic digits only.
  3. For a base-16 digit (hexadecimal) string, the characters can be the basic digits as well as the basic letters "A" to "F" or "a" to "f" (not both).
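
For instance, a Python sketch of an alphanumeric string generator (item 1), using secrets.choice, which samples uniformly; the function name is illustrative:

import secrets
import string

def random_string(string_size):
    # Builds a random alphanumeric string from the basic digits
    # and the basic upper- and lower-case letters.
    chars = string.digits + string.ascii_uppercase + string.ascii_lowercase
    return "".join(secrets.choice(chars) for _ in range(string_size))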

Notes:

  1. If the list of characters is fixed, the list can be created in advance at runtime or compile time, or (if every character takes up the same number of code units) a string type as provided in the programming language can be used to store the list as a string.
  2. Unique random strings: Generating character strings that are not only random, but also unique, can be done by storing a list (such as a hash table) of strings already generated and checking newly generated strings against that list. However, if the unique values will identify something, such as database records or user accounts, an application may care about the choice of PRNG (or other device or program that simulates a "source of random numbers"), so that using random unique values might not be best; see my recommendation document.
  3. Word generation: This technique could also be used to generate "pronounceable" words, but this is less flexible than other approaches; see also "Markov Chains".

Pseudocode for Random Sampling

The following pseudocode implements two methods:

  1. RandomKItemsFromFile implements reservoir sampling; it chooses up to k random items from a file of indefinite size (file). Although the pseudocode refers to files and lines, the technique works whenever items are retrieved one at a time from a data set or list whose size is not known in advance. In the pseudocode, ITEM_OUTPUT(item, thisIndex) is a placeholder for code that returns the item to store in the list; this can include the item's value, its index starting at 0, or both.
  2. RandomKItemsInOrder returns a list of up to k random items from the given list (list), in the order in which they appeared in the list. It is based on a technique presented in (Devroye 1986)(20), p. 620.

 

METHOD RandomKItemsFromFile(file, k)
  list = NewList()
  j = 0
  index = 0
  while true // until a value is returned
    // Get the next line from the file
    item = GetNextLine(file)
    thisIndex = index
    index = index + 1
    // If the end of the file was reached, break
    if item == nothing: break
    // NOTE: The following line is OPTIONAL
    // and can be used to choose only random lines
    // in the file that meet certain criteria,
    // expressed as MEETS_CRITERIA below.
    // ------
    // if not MEETS_CRITERIA(item): continue
    // ------
    if j < k // phase 1 (fewer than k items)
      AddItem(list, ITEM_OUTPUT(item, thisIndex))
      j = j + 1
    else // phase 2
      j = RNDINT(thisIndex)
      if j < k: list[j] = ITEM_OUTPUT(item, thisIndex)
    end
  end
  // NOTE: Shuffle at the end in case k or
  // fewer lines were in the file, since in that
  // case the items would appear in the same
  // order as they appeared in the file
  // if the list weren't shuffled.  This line
  // can be removed, however, if the order of
  // the items in the list is unimportant.
  if size(list)>=2: Shuffle(list)
  return list
END METHOD

METHOD RandomKItemsInOrder(list, k)
  n = size(list)
  // Special case if k is 1
  if k==1: return [list[RNDINTEXC(n)]]
  i = 0
  kk = k
  ret = NewList()
  while i < n and size(ret) < k
    u = RNDINTEXC(n - i)
    if u < kk
      AddItem(ret, list[i])
      kk = kk - 1
    end
    i = i + 1
  end
  return ret
END METHOD
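
A compact Python sketch of the reservoir-sampling idea in RandomKItemsFromFile, taking any iterable rather than a file; the function name is illustrative, and the final shuffle can be dropped if the order of the sampled items is unimportant.

import random

def random_k_items(iterable, k):
    # Keeps a uniform random sample of up to k items from a
    # stream whose size need not be known in advance.
    reservoir = []
    for index, item in enumerate(iterable):
        if index < k:
            reservoir.append(item)
        else:
            j = random.randrange(index + 1)  # RNDINT(thisIndex)
            if j < k:
                reservoir[j] = item
    random.shuffle(reservoir)  # optional; see note in pseudocode
    return reservoir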

Examples:

  1. Removing k random items from a list of n items (list) is equivalent to generating a new list by RandomKItemsInOrder(list, n - k).
  2. Filtering: If an application needs to sample the same list (with or without replacement) repeatedly, but only from among a selection of that list's items, it can create a list of items it wants to sample from (or a list of indices to those items), and sample from the new list instead.(21) This won't work well, though, for lists of indefinite or very large size.

Rejection Sampling

Rejection sampling is a simple and flexible approach for generating random content that meets certain requirements. To implement rejection sampling:

  1. Generate the random content (such as a number or text string) by any method and with any distribution and range.
  2. If the content doesn't meet predetermined criteria, go to step 1.

Example criteria include checking—

(KD-trees, hash tables, red–black trees, prime-number testing algorithms, and regular expressions are outside the scope of this document.)
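
As a trivial Python example of this pattern, the following sketch chooses a uniform random integer in [1, 100] that is not divisible by 7 (the criterion here is arbitrary, for illustration only):

import random

def random_not_divisible_by_7():
    while True:
        n = random.randint(1, 100)  # step 1: generate
        if n % 7 != 0:              # step 2: accept only if criteria are met
            return n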

Notes:

  1. The running time for rejection sampling depends on the acceptance rate, that is, how often the sampler accepts a sampled outcome rather than rejecting it. In general, this rate is the number of acceptable outcomes divided by the total number of outcomes.
  2. All rejection sampling strategies have a chance to reject data, so they all have a variable running time (in fact, they could run indefinitely). Note that graphics processing units (GPUs) and other devices that run multiple tasks at once work better if all the tasks finish their work at the same time. This is not possible if they all implement a rejection sampling technique because of its variable running time. If each iteration of the rejection sampler has a low rejection rate, one solution is to have each task run one iteration of the sampler, with its own "source of random numbers" (such as numbers generated from its own PRNG), then to take the first sampled number that hasn't been rejected this way by a task (which can fail at a very low rate).(22)

Random Walks

A random walk is a process with random behavior over time. A simple form of random walk involves choosing, at random, a number that changes the state of the walk. The pseudocode below generates a random walk with n steps, where STATEJUMP() is the next number to add to the current state (see examples later in this section).

METHOD RandomWalk(n)
  // Create a new list with an initial state
  list=[0]
  // Add 'n' new numbers to the list.
  for i in 0...n: AddItem(list, list[i] + STATEJUMP())
  return list
END METHOD

Notes:

  1. A white noise process is simulated by creating a list of independent random variates generated in the same way. Such a process generally models behavior over time that does not depend on the time or the current state. One example is ZeroOrOne(px,py) (for modeling a Bernoulli process, where each number is 0 or 1 depending on the probability px/py).
  2. A useful reference here is De Bruyne et al. (2021)(23).

Examples:

  1. If STATEJUMP() is RNDINT(1) * 2 - 1, the random walk generates numbers that each differ from the last by -1 or 1, chosen at random (see the sketch after these examples).
  2. If STATEJUMP() is ZeroOrOne(px,py) * 2 - 1, the random walk generates numbers that each differ from the last by -1 or 1 depending on the probability px/py.
  3. Binomial process: If STATEJUMP() is ZeroOrOne(px,py), the random walk advances the state with probability px/py.
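
A Python sketch of example 1, where each step moves the state by -1 or +1 with equal probability; the function name is illustrative.

import random

def random_walk(n):
    # Returns a list of n+1 states, starting at 0.
    walk = [0]
    for _ in range(n):
        walk.append(walk[-1] + random.choice([-1, 1]))  # STATEJUMP()
    return walk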

Random Dates and Times

Pseudocode like the following can be used to choose a random date-and-time bounded by two dates-and-times (date1, date2): dtnum1 = DATETIME_TO_NUMBER(date1); dtnum2 = DATETIME_TO_NUMBER(date2); num = RNDINTRANGE(dtnum1, dtnum2); result = NUMBER_TO_DATETIME(num).

In that pseudocode, DATETIME_TO_NUMBER and NUMBER_TO_DATETIME convert a date-and-time to or from a number, respectively, at the required granularity, for instance, month, day, or hour granularity (the details of such conversion depend on the date-and-time format and are outside the scope of this document). Instead of RNDINTRANGE(date1, date2), any other random selection strategy can be used.
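
For example, a Python sketch at day granularity, where datetime.date.toordinal (a proleptic-Gregorian day count) serves as DATETIME_TO_NUMBER; the function name is illustrative.

import datetime
import random

def random_date(date1, date2):
    # Uniform random date between date1 and date2, inclusive.
    num1 = date1.toordinal()               # DATETIME_TO_NUMBER
    num2 = date2.toordinal()
    num = random.randint(num1, num2)       # RNDINTRANGE
    return datetime.date.fromordinal(num)  # NUMBER_TO_DATETIME

print(random_date(datetime.date(2020, 1, 1), datetime.date(2020, 12, 31)))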

Randomization in Statistical Testing

Statistical testing uses shuffling and bootstrapping to help draw conclusions on data through randomization.

After creating the simulated data sets, one or more statistics, such as the mean, are calculated for each simulated data set as well as the original data set, then the statistics for the simulated data sets are compared with those of the original (such comparisons are outside the scope of this document).

Markov Chains

A Markov chain models one or more states (for example, individual letters or syllables), and stores the probabilities of transitioning from one state to another (e.g., "b" to "e" with a chance of 20 percent, or "b" to "b" with a chance of 1 percent). Thus, each state can be seen as having its own list of weights for each relevant state transition (see "Weighted Choice With Replacement"). For example, a Markov chain for generating "pronounceable" words, or words similar to natural-language words, can include "start" and "stop" states for the start and end of the word, respectively.

An algorithm called coupling from the past (Propp and Wilson 1996)(25) can sample a state from a Markov chain's stationary distribution, that is, the chain's steady state, by starting multiple chains at different states and running them until they all reach the same state at the same time. However, stopping the algorithm early can introduce bias unless precautions are taken (Fill 1998)(26). The algorithm works correctly if the chain has a finite number of states and is not periodic, and each state is reachable from every other.

The following pseudocode implements coupling from the past. In the method, StateCount is the number of states in the Markov chain, UPDATE(chain, state, random) transitions the Markov chain to the next state given the current state and random variates, and RANDOM() generates one or more random variates needed by UPDATE.

METHOD CFTP(chain)
   states=[]
   numstates=StateCount(chain)
   done=false
   randoms=[]
   while not done
      // Start multiple chains at different states.  NOTE:
      // If the chain is monotonic (meaning the states
      // are ordered and, whenever state A is less
      // than state B, A's next state is never higher than
      // B's next state), then just two chains can be
      // created instead, starting
      // at the first and last state, respectively.
      states=[] // Reset the states for this iteration
      for i in 0...numstates: AddItem(states, i)
      // Update each chain with the same randomness
      AddItem(randoms, RANDOM())
      for k in 0...size(randoms):
         for i in 0...numstates: states[i]=
            UPDATE(chain, states[i], randoms[size(randoms)-1-k])
      end
      // Stop when all states are the same
      fs=states[0]
      done=true
      for i in 1...numstates: done=(done and states[i]==fs)
   end
   return states[0] // Return the steady state
END METHOD
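
The following Python sketch runs coupling from the past on a toy three-state chain; update is a hypothetical transition function mapping a state and a uniform random number in [0, 1) to the next state, and all names are illustrative.

import random

def cftp(num_states, update):
    # Coupling from the past: reuse the same randomness for all
    # chains, extending it one step further into the past each
    # round, until all chains coalesce to the same state.
    randoms = []
    while True:
        randoms.append(random.random())  # randomness for one more past step
        states = list(range(num_states))
        # randoms[-1] is the earliest time step, randoms[0] the latest.
        for r in reversed(randoms):
            states = [update(s, r) for s in states]
        if all(s == states[0] for s in states):
            return states[0]

# Toy chain: from state s, move up or down, clamped to [0, 2].
def update(s, r):
    return min(2, s + 1) if r < 0.5 else max(0, s - 1)

print(cftp(3, update))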

Random Graphs

A graph is a listing of points and the connections between them. The points are called vertices and the connections, edges.

A convenient way to represent a graph is an adjacency matrix. This is an n×n matrix with n rows and n columns (signifying n vertices in the graph). For simple graphs, an adjacency matrix contains only 1s and 0s — for the cell at row r and column c, a 1 means there is an edge pointing from vertex r to vertex c, and a 0 means there are none.

In this section, Zeros(n) creates an n×n zero matrix (such as a list consisting of n lists, each of which contains n zeros).

The following method generates a random n-vertex graph that follows the model G(n, p) (also known as the Gilbert model (Gilbert 1959)(27)), where each edge is drawn with probability px/py (Batagelj and Brandes 2005)(28).

METHOD GNPGraph(n, px, py)
    graph=Zeros(n)
    for i in 2...n
       j = i
       while j > 0
          j = j - 1 - min(NegativeBinomialInt(1, px, py), j - 1)
          if j > 0
             // Build an edge
             graph[i][j]=1
             graph[j][i]=1
          end
       end
    end
    return graph
END METHOD

Other kinds of graphs are possible, including Erdős–Rényi graphs (choose m random edges without replacement, given an n×n adjacency matrix), Chung–Lu graphs, preferential attachment graphs, and more. For example, a mesh graph is a graph in the form of a rectangular mesh, where vertices are the corners and edges are the sides of the mesh's rectangles. A random maze is a random spanning tree (Costantini 2020)(29) of a mesh graph. Penschuck et al. (2020)(30) give a survey of random graph generation techniques.
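
For illustration, a naive Python sketch of the G(n, p) model, testing each vertex pair separately rather than skipping ahead as GNPGraph does; the function name is illustrative.

import random

def gnp_graph_naive(n, px, py):
    # Adjacency matrix of a G(n, p) graph with p = px/py; each of
    # the n*(n-1)/2 possible edges is drawn independently.
    graph = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            if random.randrange(py) < px:  # true with probability px/py
                graph[i][j] = 1
                graph[j][i] = 1
    return graph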

A Note on Sorting Random Variates

In general, sorting random variates is no different from sorting any other data. (Sorting algorithms are outside this document's scope.) (31)

General Non-Uniform Distributions

Some applications need to choose random values such that some of them are more likely to be chosen than others (a non-uniform distribution). Most of the techniques in this section show how to use the uniform random integer methods to generate such random values.

Weighted Choice

The weighted choice method generates a random item or number from among a collection of them with separate probabilities of each item or number being chosen. There are several kinds of weighted choice.

Weighted Choice With Replacement

The first kind is called weighted choice with replacement (which can be thought of as drawing a ball, then putting it back) or a categorical distribution, where the probability of choosing each item doesn't change as items are chosen. In the following pseudocode:

 

METHOD WChoose(weights, value)
    // Choose the index according to the given value
    runningValue = 0
    for i in 0...size(weights)
       if weights[i] > 0
          newValue = runningValue + weights[i]
          // NOTE: Includes start, excludes end
          if value < newValue: return i
          runningValue = newValue
       end
    end
    // Should not happen with integer weights
    return error
END METHOD

METHOD WeightedChoice(weights)
    return WChoose(weights,
        RNDINTEXC(Sum(weights)))
END METHOD

METHOD CumulativeWeightedChoice(weights)
    if size(weights)==0 or weights[0]!=0: return error
    value = RNDINTEXC(weights[size(weights) - 1])
    // Choose the index according to the given value
    for i in 0...size(weights) - 1
       // Choose only if difference is positive
       if weights[i] < weights[i+1] and
           value < weights[i+1]: return i
    end
    return 0
END METHOD

METHOD NormalizeRatios(ratios)
  prod=1; gc=0
  for r in ratios: prod*=r[1] // Multiply denominators
  weights=[]
  for r in ratios
     rn = floor((r[0] * prod) / r[1])
     gc = gcd(rn, gc); AddItem(weights, rn)
  end
  if gc==0: return weights
  for i in 0...size(weights): weights[i]=floor(weights[i]/gc)
  return weights
END METHOD

The following are various ways to implement WeightedChoice. Many of them require using a special data structure.

Algorithm | Notes
Linear search | See the pseudocode for WeightedChoice above.
Linear search with cumulative weights | Calculates a list of cumulative weights (also known as a cumulative distribution table or CDT), then generates, at random, a number less than the sum of the original weights (which should be an integer if the weights are integers), then does a linear scan of the new list to find the first item whose cumulative weight exceeds the generated number.
Fast Loaded Dice Roller (Saad et al., 2020a)(32) | Uses integer weights only, and samples using random bits ("fair coins"). This sampler comes within 6 bits, on average, of the optimal number of random bits per sample.
Samplers described in (Saad et al., 2020b)(33) | Use integer weights only, and sample using random bits. The samplers come within 2 bits, on average, of the optimal number of random bits per sample as long as the sum of the weights is of the form 2^k or 2^k − 2^m.
Rejection sampling | Given a list (weights) of n weights: (1) find the highest weight and call it max; (2) set i to RNDINT(n - 1); (3) with probability weights[i]/max (e.g., if ZeroOrOne(weights[i], max)==1 for integer weights), return i, and go to step 2 otherwise. (See, e.g., sec. 4 of the Fast Loaded Dice Roller paper, or the Tang or Klundert papers. weights[i] can also be a function that calculates the weight for i "on the fly"; in that case, max is the maximum value of weights[i] for every i.)
  If the weights are instead log-weights (that is, each weight is ln(x)/ln(b), where x is the original weight and b is the logarithm base), step 3 changes to: "(3) If Expo(ln(b)) > max - weights[i] (which happens with probability pow(b, -(max - weights[i]))), return i, and go to step 2 otherwise."
  If the weights are instead "coins", each with a separate but unknown probability of heads, the algorithm is also called a Bernoulli race (Dughmi et al. 2017)(34); see also (Morina et al., 2019)(35): (1) set i to RNDINT(n - 1); (2) flip coin i (the first coin is 0, the second is 1, and so on), then return i if it shows 1 or heads, or go to step 1 otherwise.
Bringmann and Panagiotou (2012/2017)(36) | Shows a sampler designed to work on a sorted list of weights.
Alias method (Walker 1977)(37) | Michael Vose's version of the alias method (Vose 1991)(38) is described in "Darts, Dice, and Coins: Sampling from a Discrete Distribution". Weights should be rational numbers.
(Klundert 2019)(39) | Various data structures, with emphasis on how they can support changes in weights.
The Bringmann–Larsen succinct data structure (Bringmann and Larsen 2013)(40) | Uses rejection sampling if the sum of weights is large, and a compressed structure otherwise.
Hübschle-Schneider and Sanders (2019)(41) | Parallel weighted random samplers.
(Tang 2019)(42) | Presents various algorithms, including two- and multi-level search, binary search (with cumulative weights), and a new "flat" method.
"Loaded Die from Biased Coins" | Given a list of probabilities probs that must sum to 1 and should be rational numbers: (1) set cumu to 1 and i to 0; (2) with probability probs[i]/cumu, return i; (3) subtract probs[i] from cumu, then add 1 to i, then go to step 2. For a correctness proof, see "Darts, Dice, and Coins". If each probability in probs is calculated "on the fly", this is also called sequential search; see chapter 10 of Devroye (1986)(20) (but this generally won't be exact unless all the probabilities involved are rational numbers).
Knuth and Yao (1976)(10) | Generates a binary DDG tree from the binary expansions of the probabilities (that is, they have the base-2 form 0.bbbbbb..., where each b is 0 or 1). Comes within 2 bits, on average, of the optimal number of random bits per sample. This is suggested in exercise 3.4.2 of chapter 15 of Devroye (1986)(20), implemented in randomgen.py as the discretegen method, and also described in (Devroye and Gravel 2020)(12). discretegen can work with probabilities that are irrational numbers (which have infinite binary expansions) as long as there is a way to calculate the binary expansion "on the fly".
Han and Hoshi (1997)(43) | Uses cumulative probabilities as input and comes within 3 bits, on average, of the optimal number of random bits per sample. Also described in (Devroye and Gravel 2020)(12).

Notes:

  1. Weighted choice algorithms as binary tree walkers. Just like RNDINT algorithms (see the RNDINT section), weighted choice algorithms can all be described as random walks on a binary DDG tree. In this case, though, the probabilities are not necessarily uniform, and on average, the algorithm needs at least as many random bits as the sum of binary entropies of all the probabilities involved. For example, say we give the four integers 1, 2, 3, 4 the following weights: 3, 15, 1, 2. The binary entropies of these weights add up to 0.4010... + 0.3467... + 0.2091... + 0.3230... = 1.2800... (because the sum of the weights is 21 and the binary entropy of 3/21 is 0 - (3/21) * log2(3/21) = 0.4010..., and so on for the other weights). Thus, any weighted sampler will require at least 1.2800... bits on average to generate a number with probability proportional to these weights.(11) The note "Reducing 'bit waste'" from the RNDINT section also applies here.
  2. For best results, weights passed to the algorithms in the table above should first be converted to integers (e.g., using NormalizeRatios in the pseudocode above), or rational numbers when indicated. (Obviously, if the weights were originally irrational numbers, this conversion will be lossy and the algorithm won't be exact, unless noted otherwise in the table.) Also, using floating-point numbers in the algorithms can introduce unnecessary rounding errors, so such numbers should be avoided.
  3. The Python sample code contains a variant of the WeightedChoice pseudocode for generating multiple random points in one call.

Examples:

  1. Assume we have the following list: ["apples", "oranges", "bananas", "grapes"], and weights is the following: [3, 15, 1, 2]. The weight for "apples" is 3, and the weight for "oranges" is 15. Since "oranges" has a higher weight than "apples", the index for "oranges" (1) is more likely to be chosen than the index for "apples" (0) with the WeightedChoice method. The following idiom implements how to get a randomly chosen item from the list with that method: item = list[WeightedChoice(weights)]. (A Python sketch of this example appears after these examples.)
  2. Example 1 can be implemented with CumulativeWeightedChoice instead of WeightedChoice if weights is the following list of cumulative weights: [0, 3, 18, 19, 21].
  3. Piecewise constant distribution. Assume the weights from example 1 are used and the list contains the following: [0, 5, 10, 11, 13] (one more item than the weights). This expresses four intervals: [0, 5), [5, 10), and so on. Choose a random index (and thus interval) with index = WeightedChoice(weights). Then independently, choose a number in the chosen interval uniformly at random (for example, code like the following chooses a random integer this way: number = RNDINTEXCRANGE(list[index], list[index + 1])).
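
A linear-search WeightedChoice in Python, applied to example 1; random.randrange plays the role of RNDINTEXC, and the function name is illustrative.

import random

def weighted_choice(weights):
    # Linear search: returns index i with probability
    # weights[i] / sum(weights); weights are integers 0 or greater.
    value = random.randrange(sum(weights))
    running = 0
    for i, w in enumerate(weights):
        running += w
        if value < running:
            return i
    raise RuntimeError  # should not happen with integer weights

items = ["apples", "oranges", "bananas", "grapes"]
weights = [3, 15, 1, 2]
print(items[weighted_choice(weights)])  # "oranges" is likeliest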

Weighted Choice Without Replacement (Multiple Copies)

To implement weighted choice without replacement (which can be thought of as drawing a ball without putting it back) when each weight is an integer 0 or greater, generate an index by WeightedChoice, and then decrease the weight for the chosen index by 1. In this way, each weight behaves like the number of "copies" of each item. The pseudocode below is an example of this. It assumes 'weights' is a list that can be modified.

totalWeight = Sum(weights)
// Choose as many items as the sum of weights
i = 0
items = NewList()
while i < totalWeight
    index = WeightedChoice(weights)
    // Decrease weight by 1 to implement selection
    // without replacement.
    weights[index] = weights[index] - 1
    // NOTE: Assumes the 'list' of items was declared
    // earlier and has at least as many items as 'weights'
    AddItem(items, list[index])
    i = i + 1
end

Alternatively, if all the weights are integers 0 or greater and their sum is relatively small, create a list with as many copies of each item as its weight, then shuffle that list. The resulting list will be ordered in a way that corresponds to a weighted random choice without replacement.
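A Python sketch of this copies-and-shuffle alternative follows (Python 3.9's random.sample(..., counts=...) offers a built-in equivalent); the function name is illustrative.

import random

def weighted_order_without_replacement(items, weights):
    # Makes weights[i] copies of items[i], then shuffles; the
    # result is a weighted random order without replacement.
    copies = []
    for item, weight in zip(items, weights):
        copies.extend([item] * weight)
    random.shuffle(copies)
    return copies

print(weighted_order_without_replacement(["good", "bad"], [3, 1]))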

Note: Some applications (particularly some games) wish to control which random outcomes appear, to make those outcomes appear "fairer" to users (e.g., to avoid long streaks of good outcomes or of bad outcomes). When the weighted sampling in this section is used for this purpose, each item represents a different outcome (e.g., "good" or "bad"), and the lists are replenished once no further items can be chosen. However, this kind of sampling should be avoided in applications that care about information security, including when a player or user would have a significant and unfair advantage if the outcomes were easy to guess.

Weighted Choice Without Replacement (Single Copies)

The following are ways to implement weighted choice without replacement, where each item can be chosen no more than once at random. The weights have the property that higher-weighted items are more likely to appear first.

Weighted Choice Without Replacement (List of Unknown Size)

If the number of items in a list is not known in advance, then the following pseudocode implements a method, RandomKItemsFromFileWeighted, that selects up to k random items from a file (file) of indefinite size (similarly to RandomKItemsFromFile). This is also known as weighted reservoir sampling. See (Efraimidis and Spirakis 2005)(44), and see also (Efraimidis 2015)(45), (Vieira 2014)(46), and (Vieira 2019)(47). In the pseudocode below:

 

METHOD RandomKItemsFromFileWeighted(file, k)
  queue=[] // Initialize priority queue
  index = 0
  while true // until a value is returned
    // Get the next line from the file
    item = GetNextLine(file)
    // If the end of the file was reached, break
    if item == nothing: break
    thisIndex = index
    index = index + 1
    weight = WEIGHT_OF_ITEM(item, thisIndex)
    // Ignore if item's weight is 0 or less
    if weight <= 0: continue
    // NOTE: Equivalent to Expo(weight)
    key = ExpoNew(weight)
    itemToStore = ITEM_OUTPUT(item, thisIndex, key)
    // Begin priority queue add operation.  Instead
    // of the item ('item'), the line number starting at one
    // ('thisIndex') can be added to the queue.
    if size(queue) < k // Fewer than k items
      AddItem(queue, [key, itemToStore])
      Sort(queue)
    else
      // Check whether this key is smaller
      // than the largest key in the queue
      if ExpoLess(key, queue[size(queue)-1][0])
          // Replace the item with the largest key
          queue[size(queue)-1]=[key, itemToStore]
          Sort(queue)
      end
    end
    // End priority queue add operation
  end
  list=[]
  for v in 0...size(queue): AddItem(list, queue[v][1])
  return list
END METHOD

Note: Weighted choice with replacement can be implemented by doing one or more concurrent runs of RandomKItemsFromFileWeighted(file, 1) (making sure each run traverses file the same way for multiple runs as for a single run) (Efraimidis 2015)(45).
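
A floating-point Python sketch of this weighted reservoir idea, using random.expovariate(weight) as an approximate stand-in for the exact exponential keys (ExpoNew/ExpoLess) in the pseudocode; the function name is illustrative, and the floating-point keys introduce rounding error the exact method avoids.

import heapq
import random

def weighted_reservoir(stream, k):
    # `stream` yields (item, weight) pairs; keeps the k items
    # with the smallest Exp(weight)-distributed keys.
    heap = []  # max-heap via negated keys: entries are (-key, index, item)
    for index, (item, weight) in enumerate(stream):
        if weight <= 0:
            continue  # ignore items with nonpositive weight
        key = random.expovariate(weight)
        if len(heap) < k:
            heapq.heappush(heap, (-key, index, item))
        elif key < -heap[0][0]:
            heapq.heapreplace(heap, (-key, index, item))
    return [item for _, _, item in heap]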

Weighted Choice with Inclusion Probabilities

For the weighted-choice-without-replacement methods given earlier, the weights have the property that higher-weighted items are chosen first, but each item's weight is not necessarily the chance that a given sample of n items will include that item (an inclusion probability). The following method chooses a random sample of n indices from a list of items (whose weights are given as weights), such that the chance that index k is in the sample is given as weights[k]*n/Sum(weights). The chosen indices will not necessarily be in random order. The method implements the splitting method found in pp. 73-74 in "Algorithms of sampling with equal or unequal probabilities".

METHOD InclusionSelect(weights, n)
  if n>size(weights): return error
  if n==0: return []
  ws=Sum(weights)
  wts=[]
  items=[]
  // Calculate inclusion probabilities
  for i in 0...size(weights):
     AddItem(wts,[MakeRatio(weights[i],ws)*n, i])
  Sort(wts)
  // Check for invalid inclusion probabilities
  if wts[size(wts)-1][0]>1: return error
  last=size(wts)-n
  if n==size(wts)
    for i in 0...n: AddItem(items,i)
    return items
  end
  while true // until a value is returned
    lamda=min(MakeRatio(1,1)-wts[last-1][0],wts[last][0])
    if lamda==0: return error
    if ZeroOrOne(lamda[0],lamda[1])
      for k in 0...size(wts)
        if k+1>size(wts)-n:AddItem(items,wts[k][1])
      end
      return items
    end
    newwts=[]
    for k in 0...size(wts)
      newwt=(k+1<=last) ?
          wts[k][0]/(1-lamda) : (wts[k][0]-lamda)/(1-lamda)
      AddItem(newwts,[newwt,wts[k][1]])
    end
    wts=newwts
    Sort(wts)
  end
END METHOD

Mixtures of Distributions

A mixture consists of two or more probability distributions with separate probabilities of being sampled. To generate random content from a mixture—

  1. generate index = WeightedChoice(weights), where weights is a list of relative probabilities that each distribution in the mixture will be sampled, then
  2. based on the value of index, generate the random content from the corresponding distribution.

Examples:

  1. One mixture consists of two components: the sum of three six-sided virtual die rolls, and the result of a single six-sided die roll, where there is an 80% chance to roll one six-sided virtual die rather than three. The following pseudocode shows how this mixture can be sampled (a Python sketch appears after these examples): index = WeightedChoice([80, 20]); number = 0; if index==0: number = RNDINTRANGE(1,6); else: number = RNDINTRANGE(1,6) + RNDINTRANGE(1,6) + RNDINTRANGE(1,6).
  2. Choosing an independent uniform random point from a complex shape (in any number of dimensions) is equivalent to doing such sampling from a mixture of simpler shapes that make up the complex shape (here, the weights list holds the n-dimensional "volume" of each simpler shape). For example, a simple closed 2D polygon can be triangulated, or decomposed into triangles, and a mixture of those triangles can be sampled.(48)
  3. Take a set of nonoverlapping integer ranges (for example, [0, 5], [7, 8], [20, 25]). To choose an independent uniform random integer from those ranges:

    • Create a list (weights) of weights for each range. Each range is given a weight of (mx - mn) + 1, where mn is that range's minimum and mx is its maximum.
    • Choose an index using WeightedChoice(weights), then generate RNDINTRANGE(mn, mx), where mn is the corresponding range's minimum and mx is its maximum.

    This method can be adapted for rational numbers with a common denominator by treating the integers involved as the numerators for such numbers. For example, [0/100, 5/100], [7/100, 8/100], [20/100, 25/100], where the numerators are the same as in the previous example.

  4. In the pseudocode index = WeightedChoice([80, 20]); list = [[0, 5], [5, 10]]; number = RNDINTEXCRANGE(list[index][0], list[index][1]), a random integer in [0, 5) is chosen at an 80% chance, and a random integer in [5, 10) at a 20% chance.
  5. A hyperexponential distribution is a mixture of exponential distributions, each one with a separate weight and separate rate parameter. The following is an example involving three different exponential distributions: index = WeightedChoice([6, 3, 1]); rates = [[3, 10], [5, 10], [1, 100]]; number = ExpoRatio(100000, rates[index][0], rates[index][1]).
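
The Python sketch below implements example 1; random.randint plays the role of RNDINTRANGE, and the function name is illustrative.

import random

def damage_roll():
    # 80% chance: one six-sided die; 20% chance: sum of three.
    if random.randrange(100) < 80:  # WeightedChoice([80, 20]) == 0
        return random.randint(1, 6)
    return sum(random.randint(1, 6) for _ in range(3))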

Transformations of Random Variates

Random variates can be generated by combining and/or transforming one or more random variates and/or discarding some of them.

As an example, "Probability and Games: Damage Rolls" by Red Blob Games includes interactive graphics showing score distributions for lowest-of, highest-of, drop-the-lowest, and reroll game mechanics.(49) These and similar distributions can be generalized as follows.

Generate one or more random variates (numbers), each with a separate probability distribution, then:

  1. Highest-of: Choose the highest generated variate.
  2. Drop-the-lowest: Add all generated variates except the lowest.
  3. Reroll-the-lowest: Add all generated variates except the lowest, then add a number generated randomly by a separate probability distribution.
  4. Lowest-of: Choose the lowest generated number.
  5. Drop-the-highest: Add all generated variates except the highest.
  6. Reroll-the-highest: Add all generated variates except the highest, then add a number generated randomly by a separate probability distribution.
  7. Sum: Add all generated variates.
  8. Mean: Add all generated variates, then divide the sum by the number of variates.
  9. Geometric transformation: Treat the variates as an n-dimensional point, then apply a geometric transformation, such as a rotation or other affine transformation(50), to that point.

If the probability distributions are the same, then strategies 1 to 3 make higher numbers more likely, and strategies 4 to 6, lower numbers.

Note: Variants of strategy 4 — e.g., choosing the second-, third-, or nth-lowest number — are formally called second-, third-, or nth-order statistics distributions, respectively.
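To make the strategies above concrete, here is a minimal Python sketch for six-sided die rolls; the function names are illustrative, and random.randint stands in for RNDINTRANGE:

import random

def roll():
    # One six-sided virtual die.
    return random.randint(1, 6)

def highest_of(n):       # strategy 1
    return max(roll() for _ in range(n))

def drop_the_lowest(n):  # strategy 2
    rolls = sorted(roll() for _ in range(n))
    return sum(rolls[1:])

def lowest_of(n):        # strategy 4
    return min(roll() for _ in range(n))

def mean_of(n):          # strategy 8
    return sum(roll() for _ in range(n)) / n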

Examples:

  1. The idiom min(RNDINTRANGE(1, 6), RNDINTRANGE(1, 6)) takes the lowest of two six-sided die results (strategy 4). With this approach, 1 is more likely to occur than 6.
  2. The idiom RNDINTRANGE(1, 6) + RNDINTRANGE(1, 6) takes the result of two six-sided dice (see also "Dice") (strategy 7).
  3. A binomial distribution models the sum of n numbers each generated by ZeroOrOne(px,py) (strategy 7) (see "Binomial Distribution").
  4. A Poisson binomial distribution models the sum of n numbers each with a separate probability of being 1 as opposed to 0 (strategy 7). Given probs, a list of the n probabilities as rational numbers, the pseudocode is: x=0; for i in 0...n: x=x+ZeroOrOne(probs[i][0],probs[i][1]); return x.
  5. Clamped random variates. These are one example of transformed random variates. To generate a clamped random variate, generate a number at random as usual, then—

    • if that number is less than a minimum threshold, use the minimum threshold instead (left-censoring), and/or
    • if that number is greater than a maximum threshold, use the maximum threshold instead (right-censoring).

    An example of a clamped random variate is min(200, RNDINT(255)).

  6. A compound Poisson distribution models the sum of n numbers each chosen at random in the same way, where n follows a Poisson distribution (e.g., n = PoissonInt(10, 1) for an average of 10 numbers) (strategy 7, sum).
  7. A Pólya–Aeppli distribution is a compound Poisson distribution in which the numbers are generated by NegativeBinomial(1, 1-p)+1 for a fixed p.

Specific Non-Uniform Distributions

This section contains information on some of the most common non-uniform probability distributions.

Dice

The following method generates a random result of rolling virtual dice. It takes three parameters: the number of dice (dice), the number of sides in each die (sides), and a number to add to the result (bonus) (which can be negative; the method returns 0 if the sum of the rolls and bonus would be less than 0). See also Red Blob Games, "Probability and Games: Damage Rolls".

METHOD DiceRoll(dice, sides, bonus)
    if dice < 0 or sides < 1: return error
    ret = 0
    for i in 0...dice: ret=ret+RNDINTRANGE(1, sides)
    return max(0, ret + bonus)
END METHOD

Examples: The result of rolling—

  • four six-sided virtual dice ("4d6") is DiceRoll(4, 6, 0),
  • three ten-sided virtual dice ("3d10") is DiceRoll(3, 10, 0), and
  • two six-sided virtual dice plus 4 ("2d6 + 4") is DiceRoll(2, 6, 4).

Binomial Distribution

The binomial distribution uses two parameters: trials and p. This distribution models the number of successes in a fixed number of independent trials (equal to trials), each with the same probability of success (equal to p, where p <= 0 means never, p >= 1 means always, and p = 1/2 means an equal chance of success or failure).

This distribution has a simple implementation: count = 0; for i in 0...trials: count=count+ZeroOrOne(px, py); return count. But for large numbers of trials, this can be very slow.
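A direct Python rendering of this simple approach might look like this sketch, with random.randrange standing in for ZeroOrOne(px, py):

import random

def binomial_naive(trials, px, py):
    # Counts successes among `trials` independent trials,
    # each succeeding with probability px/py.
    # Simple, but slow when `trials` is large.
    return sum(1 for _ in range(trials) if random.randrange(py) < px)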

The pseudocode below implements an exact sampler of this distribution, with certain optimizations based on (Farach-Colton and Tsai 2015)(51). (Another exact sampler is given in (Bringmann et al. 2014)(52) and described in my "Miscellaneous Observations on Randomization".) Here, the parameter p is expressed as a ratio px/py.

 

METHOD BinomialInt(trials, px, py)
  if trials < 0: return error
  if trials == 0: return 0
  // Always succeeds
  if px >= py: return trials
  // Always fails
  if px <= 0: return 0
  count = 0
  if px*2 == py // Is half
    if trials > 200
      // Divide and conquer
      half = floor(trials / 2)
      return BinomialInt(half, 1, 2) + BinomialInt(trials - half, 1, 2)
    else
      if rem(trials,2)==1
        count=count+RNDINT(1)
        trials=trials-1
      end
      // NOTE: This step can be made faster
      // by precalculating an alias table
      // based on a list of n + 1 binomial(1/2)
      // weights, which consist of n-choose-i
      // for every i in [0, n], and sampling based on
      // that table (see Farach-Colton and Tsai).
      for i in 0...trials: count=count+RNDINT(1)
    end
  else
    // Based on proof of Theorem 2 in Farach-Colton and Tsai.
    // Decompose px/py into its base-2 digits.
    pw = MakeRatio(px, py)
    pt = MakeRatio(1, 2)
    while trials>0 and pw>0
      c=BinomialInt(trials, 1, 2)
      if pw>=pt
        count=count+c
        trials=trials-c
        pw=pw-pt
      else
        trials=c
      end
      pt=pt/2 // NOTE: Not rounded
    end
  end
  return count
END METHOD

Note: If px/py is 1/2, the binomial distribution models the task "Flip N coins, then count the number of heads", and the random sum is known as Hamming distance (treating each trial as a "bit" that's set to 1 for a success and 0 for a failure). If px is 1, then this distribution models the task "Roll n py-sided dice, then count the number of dice that show the number 1."

Negative Binomial Distribution

In this document, the negative binomial distribution models the number of failing trials that happen before a fixed number of successful trials (successes). Each trial is independent and has a success probability of px/py (where 0 means never and 1 means always). The following is a naïve implementation; see also the notes for the geometric distribution, a special case of this one.

METHOD NegativeBinomialInt(successes, px, py)
    // Needs to be 0 or greater; px must not be 0
    if successes < 0 or px == 0: return error
    if successes == 0 or px >= py: return 0
    total = 0
    count = 0
    while total < successes
        if ZeroOrOne(px, py) == 1: total = total + 1
        else: count = count + 1
    end
    return count
END METHOD

If successes is a non-integer, the distribution is often called a Pólya distribution. In that case, it can be sampled using the following pseudocode (Heaukulani and Roy 2019)(53):

METHOD PolyaInt(sx, sy, px, py)
   isinteger=rem(sx,sy)==0
   sxceil=ceil(sx/sy)
   while true // until a value is returned
      w=NegativeBinomialInt(sxceil, px, py)
      if isinteger or w==0: return w
      tmp=MakeRatio(sx,sy)
      anum=tmp
      for i in 1...w: anum=anum*(tmp+i)
      tmp=sxceil
      aden=tmp
      for i in 1...w: aden=aden*(tmp+i)
      a=anum/aden
      if ZeroOrOne(a[0], a[1])==1: return w
   end
END METHOD

Geometric Distribution

The geometric distribution is a negative binomial distribution with successes = 1. In this document, a geometric random variate is the number of failures that have happened before one success happens. For example, if p is 1/2, the geometric distribution models the task "Flip a coin until you get tails, then count the number of heads." As a unique property of the geometric distribution (the memoryless property), given that n trials have failed, the number of additional failing trials has the same distribution (where n is an integer greater than 0).
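A minimal Python sketch of this failure-counting definition, with random.randrange standing in for ZeroOrOne(px, py):

import random

def geometric(px, py):
    # Counts failures before the first success, where each
    # trial succeeds with probability px/py.
    count = 0
    while random.randrange(py) >= px:  # the trial failed
        count += 1
    return count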

Notes:

  1. The negative binomial and geometric distributions are defined differently in different works. For example, Mathematica's definition excludes the last success, but the definition in (Devroye 1986, p. 498)(20) includes it. And some works may define a negative binomial number as the number of successes before N failures, rather than vice versa.
  2. A bounded geometric random variate is either n (an integer greater than 0) or a geometric random variate, whichever is less. Exact and efficient samplers for the geometric and bounded geometric distributions are given in (Bringmann and Friedrich 2013)(54) and described in my "Miscellaneous Observations on Randomization".

Exponential Distribution

The exponential distribution uses a parameter known as λ, the rate, or the inverse scale. Usually, λ is the probability that an independent event of a given kind will occur in a given span of time (such as in a given day or year), and the random result is the number of spans of time until that event happens. Usually, λ is equal to 1 (that is, the ratio 1/1). 1/λ is the scale (mean), which is usually the average waiting time between two independent events of the same kind.

In this document, Expo(lamda) is an exponentially-distributed random variate with the rate lamda.

ExpoRatio, in the pseudocode below, generates an exponential random variate (in the form of a ratio) given the rate rx/ry (or scale ry/rx) and the base base. ExpoNumerator generates the numerator of an exponential random variate with rate 1 given that number's denominator. The algorithm is due to von Neumann (1951)(55). For additional algorithms to sample exponential random variates, see "Partially-Sampled Random Numbers".

METHOD ExpoRatio(base, rx, ry)
    // Generates a numerator and denominator of
    // an exponential random variate with rate rx/ry.
    return MakeRatio(ExpoNumerator(base*ry), base*rx)
END METHOD

METHOD ExpoNumerator(denom)
   if denom<=0: return error
   count=0
   while true // until a value is returned
      y1=RNDINTEXC(denom)
      y=y1
      accept=true
      while true // until we break
         z=RNDINTEXC(denom)
         if y<=z: break
         accept=not accept
         y=z
      end
      if accept: count=count+y1
      else: count=count+denom
      if accept: break
   end
   return count
END METHOD

Poisson Distribution

The Poisson distribution uses a parameter mean (also known as λ). λ is the average number of independent events of a certain kind per fixed unit of time or space (for example, per day, hour, or square kilometer). A Poisson-distributed number is the number of such events within one such unit.

In this document, Poisson(mean) is a Poisson-distributed number if mean is greater than 0, or 0 if mean is 0.

The following method generates a Poisson random variate with mean mx/my, using the approach suggested by (Flajolet et al., 2010)(56). In the method, UniformNew() creates a uniform partially-sampled random number (an "empty" real number whose contents are not yet determined) with a positive sign, an integer part of 0, and an empty fractional part (for more information, see my article on this topic), and UniformLess(a, b) returns whether one partially-sampled random number (a) is less than another (b), and samples unbiased random bits (e.g., RNDINT(1)) from both numbers as necessary (see the UniformLess algorithm in the same article). For a less exact algorithm, replace UniformNew() with RNDINT(1000) and UniformLess(a, b) with a < b.

METHOD PoissonInt(mx, my)
    if my == 0: return error
    if mx == 0: return 0
    // Mean is less than 0
    if (mx < 0 and my > 0) or (mx > 0 and my < 0): return 0
    if mx==my: return PoissonInt(1,2)+PoissonInt(1,2)
    if mx > my
       // Mean is 1 or greater
       mm=rem(mx, my)
       if mm == 0
          mf=floor(mx/my)
          ret=0
          if rem(mf, 2)==1: ret=ret+PoissonInt(1, 1)
          if rem(mf, 2)==1: mf=mf-1
          ret=ret+PoissonInt(mf/2, 1)+PoissonInt(mf/2, 1)
          return ret
       else: return PoissonInt(mm, my)+
           PoissonInt(mx-mm, my)
    end
    if mx>floor(my*9/10)
       // Runs slowly if mx/my is close to 1, so break it up
       hmx=floor(mx/2)
       return PoissonInt(hmx,my)+PoissonInt(mx-hmx,my)
    end
    while true // until a value is returned
      k = 0
      w = nothing
      while true // until we break or return
         // Generate uniforms
         // and determine whether they are sorted
         if ZeroOrOne(mx,my)==0: return k
         u = UniformNew()
         // Break if we find an out-of-order number
         if k>0 and UniformLess(w, u): break
         w = u
         k=k+1
      end
    end
END METHOD

Note: To generate a sum of n independent Poisson random variates with separate means, generate a Poisson random variate whose mean is the sum of those means (see (Devroye 1986)(20), p. 501). For example, to generate a sum of 1000 independent Poisson random variates with a mean of 1/1000000, simply generate PoissonInt(1, 1000) (because 1/1000000 * 1000 = 1/1000).

Hypergeometric Distribution

The following method generates a random integer that follows a hypergeometric distribution. When a given number of items are drawn at random without replacement from a collection of items each labeled either 1 or 0, the random integer expresses the number of items drawn this way that are labeled 1. In the method below, trials is the number of items drawn at random, ones is the number of items labeled 1 in the collection, and count is the total number of items (labeled 1 or 0) in that collection.

METHOD Hypergeometric(trials, ones, count)
    if ones < 0 or count < 0 or trials < 0 or
       ones > count or trials > count
      return error
    end
    if ones == 0: return 0
    successes = 0
    i = 0
    currentCount = count
    currentOnes = ones
    while i < trials and currentOnes > 0
      if ZeroOrOne(currentOnes, currentCount) == 1
        currentOnes = currentOnes - 1
        successes = successes + 1
      end
      currentCount = currentCount - 1
      i = i + 1
    end
    return successes
END METHOD

Example: In a 52-card deck of Anglo-American playing cards, 12 of the cards are face cards (jacks, queens, or kings). After the deck is shuffled and seven cards are drawn, the number of face cards drawn this way follows a hypergeometric distribution where trials is 7, ones is 12, and count is 52.
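As an illustrative sketch (separate from the document's Python sample code), the method and the playing-card example can be written in Python, with random.randrange standing in for ZeroOrOne:

import random

def hypergeometric(trials, ones, count):
    # Draws `trials` items without replacement from a collection
    # of `count` items, `ones` of which are labeled 1, and returns
    # how many drawn items were labeled 1.
    successes = 0
    for _ in range(trials):
        if ones == 0:
            break
        if random.randrange(count) < ones:  # drew a 1-labeled item
            ones -= 1
            successes += 1
        count -= 1
    return successes

# Face cards among seven draws from a 52-card deck:
# hypergeometric(7, 12, 52)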

Random Integers with a Given Positive Sum

The following pseudocode shows how to generate n random integers with a given positive sum, in random order (specifically, a uniformly chosen random partition of that sum into n parts with repetition and in random order). (The algorithm for this was presented in (Smith and Tromble 2004)(57).) In the pseudocode below—

 

METHOD PositiveIntegersWithSum(n, total)
    if n <= 0 or total <=0: return error
    ls = [0]
    ret = NewList()
    while size(ls) < n
      c = RNDINTEXCRANGE(1, total)
      found = false
      for j in 1...size(ls)
        if ls[j] == c
          found = true
          break
        end
      end
      if found == false: AddItem(ls, c)
    end
    Sort(ls)
    AddItem(ls, total)
    for i in 1...size(ls): AddItem(ret,
        ls[i] - ls[i - 1])
    return ret
END METHOD

METHOD IntegersWithSum(n, total)
  if n <= 0 or total <=0: return error
  ret = PositiveIntegersWithSum(n, total + n)
  for i in 0...size(ret): ret[i] = ret[i] - 1
  return ret
END METHOD

Notes:

  1. To generate N random integers with a given positive average avg (an integer), in random order, generate IntegersWithSum(N, N * avg).
  2. To generate N random integers min or greater and with a given positive sum sum (an integer), in random order, generate IntegersWithSum(N, sum - N * min), then add min to each number generated this way. The Python sample code implements an efficient way to generate such integers if each one can't exceed a given maximum; the algorithm is thanks to a Stack Overflow answer (questions/61393463) by John McClane.
  3. To generate N rational numbers that sum to tx/ty, call IntegersWithSum(N, tx * ty * x) or PositiveIntegersWithSum(N, tx * ty * x) as appropriate (where x is the desired accuracy as an integer, such as pow(2, 32) or pow(2, 53), so that the results are accurate to 1/x or less), then for each number c in the result, convert it to MakeRatio(c, tx * ty * x) * MakeRatio(tx, ty).
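For illustration, here is a compact Python sketch of the Smith and Tromble approach above, using random.sample to choose the distinct intermediate values in one step (the function names are illustrative):

import random

def positive_integers_with_sum(n, total):
    # Chooses n-1 distinct "cut points" in [1, total-1], then
    # returns the gaps between consecutive cut points.
    if n <= 0 or total <= 0 or total < n:
        raise ValueError("invalid parameters")
    ls = [0] + sorted(random.sample(range(1, total), n - 1)) + [total]
    return [ls[i] - ls[i - 1] for i in range(1, len(ls))]

def integers_with_sum(n, total):
    # Same, but the integers can be 0.
    return [x - 1 for x in positive_integers_with_sum(n, total + n)]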

Multinomial Distribution

The multinomial distribution uses two parameters: trials and weights. It models the number of times each of several mutually exclusive events happens among a given number of trials (trials), where each event can have a separate probability of happening (given as a list of weights).

A trivial implementation is to fill a list with as many zeros as weights, then for each trial, choose index = WeightedChoice(weights) and add 1 to the item in the list at the chosen index. The resulting list follows a multinomial distribution. The pseudocode below shows an optimization suggested in (Durfee et al., 2018, Corollary 45)(58), but assumes all weights are integers.

METHOD Multinomial(trials, weights)
    if trials < 0: return error
    // create a list of successes
    list = []
    ratios = []
    sum=Sum(weights)
    for i in 0...size(weights): AddItem(ratios,
           MakeRatio(weights[i], sum))
    for i in 0...size(weights)
        r=ratios[i]
        b=BinomialInt(trials,r[0],r[1])
        AddItem(list, b)
        trials=trials-b
        if trials>0: for j in i+1...size(weights):
            ratios[j]=ratios[j]/(1-r)
    end
    return list
END METHOD

Randomization with Real Numbers

This section describes randomization methods that use random real numbers, not just random integers. These include random rational numbers, fixed-point numbers, and floating-point numbers.

But whenever possible, applications should work with random integers, rather than other random real numbers. This is because:

The methods in this section should not be used to sample at random for information security purposes, even if a secure "source of random numbers" is available. See "Security Considerations" in the appendix.

Uniform Random Real Numbers

This section defines the following methods that generate independent uniform random real numbers:

The sections that follow show how these methods can be implemented for fixed-point, rational, and floating-point numbers. An additional format for random real numbers is the partially-sampled random number.

For Fixed-Point Number Formats

For fixed-point number formats representing multiples of 1/n, these methods are trivial. The following implementations return integers that represent fixed-point numbers. In each method below, fpa and fpb are the bounds of the fixed-point number generated and are integers that represent fixed-point numbers (such that fpa = a * n and fpb = b * n). For example, if n is 100, to generate a number in [6.35, 9.96], generate RNDRANGE(6.35, 9.96) or RNDINTRANGE(635, 996).

For Rational Number Formats

For rational number formats with a fixed denominator, the RNDRANGE family can be implemented in the numerator as given in the previous section for fixed-point formats.

For Floating-Point Number Formats

For floating-point number formats representing numbers of the form FPSign * s * pow(FPRADIX, e) (60), the following pseudocode implements RNDRANGE(lo, hi). In the pseudocode:

See also (Downey 2007)(61) and the Rademacher Floating-Point Library.

METHOD RNDRANGE(lo, hi)
  losgn = FPSign(lo)
  hisgn = FPSign(hi)
  loexp = FPExponent(lo)
  hiexp = FPExponent(hi)
  losig = FPSignificand(lo)
  hisig = FPSignificand(hi)
  if lo > hi: return error
  if losgn == 1 and hisgn == -1: return error
  if losgn == -1 and hisgn == 1
    // Straddles negative and positive ranges
    // NOTE: Changes negative zero to positive
    mabs = max(abs(lo),abs(hi))
    while true // until a value is returned
       ret=RNDRANGE(0, mabs)
       neg=RNDINT(1)
       if neg==0: ret=-ret
       if ret>=lo and ret<=hi: return ret
    end
  end
  if lo == hi: return lo
  if losgn == -1
    // Negative range
    return -RNDRANGE(abs(lo), abs(hi))
  end
  // Positive range
  if loexp==hiexp
    // Exponents are the same
    // NOTE: Automatically handles
    // subnormals
    s=RNDINTRANGE(losig, hisig)
    return s*1.0*pow(FPRADIX, loexp)
  end
  while true // until a value is returned
    ex=hiexp
    while ex>MINEXP
      v=RNDINTEXC(FPRADIX)
      if v==0: ex=ex-1
      else: break
    end
    s=0
    if ex==MINEXP
      // Has FPPRECISION or fewer digits
      // and so can be normal or subnormal
      s=RNDINTEXC(pow(FPRADIX,FPPRECISION))
    else if FPRADIX != 2
      // Has FPPRECISION digits
      s=RNDINTEXCRANGE(
        pow(FPRADIX,FPPRECISION-1),
        pow(FPRADIX,FPPRECISION))
    else
      // Has FPPRECISION digits (bits), the highest
      // of which is always 1 because it's the
      // only nonzero bit
      sm=pow(FPRADIX,FPPRECISION-1)
      s=RNDINTEXC(sm)+sm
    end
    ret=s*1.0*pow(FPRADIX, ex)
    if ret>=lo and ret<=hi: return ret
  end
END METHOD

The other members of the RNDRANGE family can be derived from RNDRANGE as follows:

Note: In many software libraries, a real number in a range is chosen uniformly at random by dividing or multiplying a random integer by a constant. For example, the equivalent of RNDRANGEMaxExc(0, 1) is often implemented like RNDINTEXC(X) * (1.0/X) or RNDINTEXC(X) / X, where X varies based on the software library.(62) The disadvantage here is that doing so does not necessarily cover all numbers a floating-point format can represent in the range (Goualard 2020)(63). As another example, RNDRANGEMaxExc(a, b) is often implemented like a + Math.random() * (b - a), where Math.random() returns a number in the interval [0, 1); however, this not only has the same disadvantage, but has many other issues where floating-point numbers are involved (Monahan 1985)(64).

Monte Carlo Sampling: Expected Values, Integration, and Optimization

Requires random real numbers.

Randomization is the core of Monte Carlo sampling. There are three main uses of Monte Carlo sampling: estimation, integration, and optimization.

  1. Estimating expected values. Monte Carlo sampling can help estimate the expected value (mean or average) of a sampling distribution, or of a function of values sampled from that distribution. This function is called EFUNC(x) in this section, where x is one of the values in the sample. Algorithms to estimate expected values are called estimators; one such estimator is to sample n values, apply EFUNC(x) to each sampled value x, add the values, and divide by n (see note below). However, this estimator won't work for all distributions, since they may have an infinite expected value, and it also doesn't allow controlling for the estimate's error. This estimator is called:

    • The nth sample raw moment (a raw moment is a mean of nth powers) if EFUNC(x) is pow(x, n).
    • The sample mean, if EFUNC(x) is x or pow(x, 1).
    • The nth sample central moment (a central moment is a moment about the mean) if EFUNC(x) is pow(x-m, n), where m is the sample mean.
    • The (biased) sample variance, the second sample central moment.
    • The sample probability, if EFUNC(x) is 1 if some condition is met or 0 otherwise.

    There are two sources of error in Monte Carlo estimators: bias and variance. An estimator is unbiased (has bias 0) if multiple independent samples of any fixed size k (from the same distribution) are expected to average to the true expected value (Halmos 1946)(65). For example, any nth sample raw moment is an unbiased estimator provided k >= n, but the sample variance is not unbiased, and neither is any sample central moment other than the first (Halmos 1946)(65). Unlike with bias, there are ways to reduce variance, which are outside the scope of this document. An estimator's mean squared error equals its variance plus the square of its bias.

    For Monte Carlo estimators with accuracy guarantees, see "Randomized Estimation Algorithms".

  2. Monte Carlo integration. This is usually a special case of Monte Carlo estimation that approximates a multidimensional integral over a sampling domain; here, EFUNC(z) is the function to find the integral of, where z is a randomly chosen point in the sampling domain (such as 1 if the point is in the true volume and 0 if not).

  3. Stochastic optimization. This uses randomness to help find the minimum or maximum value of a function with one or more variables; examples include simulated annealing and simultaneous perturbation stochastic approximation (see also (Spall 1998)(66)).

Note: Assuming the true population has a mean and variance, the sample mean is an unbiased estimator of the mean, but the sample variance is generally a biased estimator of variance for any sample smaller than the whole population. The following pseudocode returns a two-item list containing the sample mean and an unbiased estimator of the variance, in that order, of a list of real numbers (list), using the Welford method presented by J. D. Cook. The square root of the variance calculated here is what many APIs call a standard deviation (e.g., Python's statistics.stdev). For the usual (biased) sample variance, replace (size(list)-1) with size(list) in the pseudocode shown next. The pseudocode follows: if size(list)==0: return [0, 0]; if size(list)==1: return [list[0], 0]; xm=list[0]; xs=0; i=1; while i < size(list): c = list[i]; i = i + 1; cxm = (c - xm); xm = xm + cxm *1.0/ i; xs = xs + cxm * (c - xm); end; return [xm, xs*1.0/(size(list)-1)].
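The same Welford-style computation as a small Python function (a sketch; the name mean_and_variance is illustrative):

def mean_and_variance(data):
    # Returns [sample mean, unbiased sample variance]
    # using the Welford method described above.
    if len(data) == 0:
        return [0, 0]
    if len(data) == 1:
        return [data[0], 0]
    xm = data[0]  # running mean
    xs = 0.0      # running sum of squared deviations
    for i in range(1, len(data)):
        c = data[i]
        cxm = c - xm
        xm += cxm / (i + 1)
        xs += cxm * (c - xm)
    return [xm, xs / (len(data) - 1)]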

Low-Discrepancy Sequences

Requires random real numbers.

A low-discrepancy sequence (or quasirandom sequence) is a sequence of numbers that behave like uniformly distributed numbers in the interval [0, 1], but are dependent on each other, in that they are less likely to form "clumps" than if they were independent. The following are examples:

The points of a low-discrepancy sequence can be "scrambled" with the help of a pseudorandom number generator (or another device or program that simulates a "source of random numbers"). In Monte Carlo sampling, low-discrepancy sequences are often used to achieve more efficient "random" sampling, but in general, they can be safely used this way only if none of their points is skipped (Owen 2020)(69).

Notes on Randomization Involving Real Numbers

Requires random real numbers.

Random Walks: Additional Examples

Transformations: Additional Examples

  1. Bates distribution: Find the mean of n uniform random variates in a given range (such as by RNDRANGE(minimum, maximum)) (strategy 8, mean; see the appendix).
  2. A random point (x, y) can be transformed (strategy 9, geometric transformation) to derive a point with correlated random coordinates (old x, new x) as follows (see (Saucier 2000)(72), sec. 3.8): [x, y*sqrt(1 - rho * rho) + rho * x], where x and y are independent numbers chosen at random in the same way, and rho is a correlation coefficient in the interval [-1, 1] (if rho is 0, x and y are uncorrelated).
  3. It is reasonable to talk about sampling the sum or mean of N random variates, where N has a fractional part. In this case, ceil(N) random variates are generated and the last variate is multiplied by that fractional part. For example, to sample the sum of 2.5 random variates, generate three random variates, multiply the last by 0.5 (the fractional part of 2.5), then add together all three variates.
  4. A hypoexponential distribution models the sum of n random variates that follow an exponential distribution and each have a separate rate parameter (see "Exponential Distribution").
  5. The maximal coupling method mentioned by P. Jacob generates correlated random variates from two distributions, P and Q, with known probability density functions or PDFs (PPDF and QPDF, respectively); this works only if the area under each PDF is 1: Sample a number x at random from distribution P, and if RNDRANGE(0, PPDF(x)) < QPDF(x), return [x, x]. Otherwise, sample a number y at random from distribution Q until PPDF(y) < RNDRANGE(0, QPDF(y)), then return [x, y].
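As a sketch of example 5 under stated assumptions, the following Python code maximally couples P = Normal(0, 1) and Q = Normal(1, 1); norm_pdf and maximal_coupling are illustrative names, and floating-point arithmetic makes this approximate:

import math, random

def norm_pdf(x, mu):
    # PDF of the normal distribution with mean mu and standard deviation 1.
    return math.exp(-((x - mu) ** 2) / 2.0) / math.sqrt(2 * math.pi)

def maximal_coupling():
    x = random.gauss(0, 1)  # sample from P
    if random.uniform(0, norm_pdf(x, 0)) < norm_pdf(x, 1):
        return [x, x]
    while True:
        y = random.gauss(1, 1)  # sample from Q
        if norm_pdf(y, 0) < random.uniform(0, norm_pdf(y, 1)):
            return [x, y]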

Sampling from a Distribution of Data Points

Requires random real numbers.

Generating random data points based on how a list of data points is distributed involves the field of machine learning: fit a data model to the data points, then predict a new data point based on that model, with randomness added to the mix. Three kinds of data models, described below, serve this purpose. (How fitting works is outside the scope of this page.)

  1. Density estimation models. Density estimation models seek to describe the distribution of data points in a given data set, where areas with more points are more likely to be sampled.(73) The following are examples.

    • Histograms are sets of one or more non-overlapping bins, which are generally of equal size. Histograms are mixtures, where each bin's weight is the number of data points in that bin. After a bin is randomly chosen, a random data point that could fit in that bin is generated (that point need not be an existing data point).
    • Gaussian mixture models are also mixtures, in this case, mixtures of one or more Gaussian (normal) distributions.
    • Kernel distributions are mixtures of sampling distributions, one for each data point. Estimating a kernel distribution is called kernel density estimation. To sample from a kernel distribution:
      1. Choose one of the numbers or points in the list uniformly at random with replacement.
      2. Add a randomized "jitter" to the chosen number or point; for example, add a separately generated Normal(0, sigma) to the chosen number or each component of the chosen point, where sigma is the bandwidth(74). (A Python sketch of these two steps appears after this list.)
    • Stochastic interpolation is described in (Saucier 2000)(72), sec. 5.3.4.
    • Fitting a known distribution (such as the normal distribution), with unknown parameters, to data can be done by maximum likelihood estimation, among other ways.
  2. Regression models. A regression model is a model that summarizes data as a formula and an error term. If an application has data in the form of inputs and outputs (e.g., monthly sales figures) and wants to sample a random but plausible output given a known input point (e.g., sales for a future month), then the application can fit and sample a regression model for that data. For example, a linear regression model, which simulates the value of y given known inputs a and b, can be sampled as follows: y = c1 * a + c2 * b + c3 + Normal(0, sqrt(mse)), where mse is the mean squared error and c1, c2, and c3 are the coefficients of the model. (Here, Normal(0, sqrt(mse)) is the error term.)

  3. Generative and discriminative models. These are machine learning models that take random variates as input and generate outputs (such as images or sounds) that are similar to examples they have already seen. One example, generative adversarial networks, employs both kinds of model.
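Here is a minimal Python sketch of the two kernel-distribution steps above, for one-dimensional data and a Gaussian kernel (kernel_sample is an illustrative name):

import random

def kernel_sample(data, sigma):
    # Step 1: choose a data point uniformly, with replacement.
    x = random.choice(data)
    # Step 2: add Normal(0, sigma) "jitter"; sigma is the bandwidth.
    return x + random.gauss(0, sigma)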

Notes:

  1. Usually, more than one kind of data model and/or machine learning model is a possible choice to fit to a given data set (e.g., multiple kinds of density estimation models, regression models, parametric distributions, and/or decision trees). If several kinds of model are fitting choices, then the kind showing the best predictive accuracy for the data set (e.g., information criterion, precision, recall) should be chosen.
  2. If the existing data points each belong in one of several categories, choosing a random category could be done by choosing a number at random with probability proportional to the number of data points in each category (see "Weighted Choice").
  3. If the existing data points each belong in one of several categories, choosing a random data point and its category could be done—
    1. by choosing a random data point based on all the existing data points, then finding its category (e.g., via machine learning models known as classification models), or
    2. by choosing a random category as given above, then by choosing a random data point based only on the existing data points of that category.

Sampling from an Arbitrary Distribution

Requires random real numbers.

Many probability distributions can be defined in terms of any of the following:

In this section, a PDF-like function is the PDF, the PMF, or either function times a (possibly unknown) positive constant.

The following sections show different ways to generate random variates based on a distribution, depending on what is known about that distribution.

Note: Lists of CDFs, PDF-like functions, or quantile functions are outside the scope of this page.

Sampling for Discrete Distributions

If the distribution is discrete, numbers that closely follow it can be sampled by choosing points that cover all or almost all of the distribution, finding their weights or cumulative weights, and choosing a random point based on those weights.

If—

the following method samples exactly from that distribution:

METHOD SampleDiscrete(mini, maxi)
      // Setup
      ratios=[]
      for i in mini..maxi: AddItem(ratios, PDF(i))
      ratios=NormalizeRatios(ratios)
      // Sampling
      return mini + WeightedChoice(ratios)
END METHOD
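For illustration, the same exact sampling can be sketched in Python with the fractions module; here, pdf(i) is assumed to return integer i's weight as a Fraction, and sample_discrete is an illustrative name:

import math, random
from fractions import Fraction

def sample_discrete(pdf, mini, maxi):
    # pdf(i) returns the weight of integer i as a Fraction.
    weights = [pdf(i) for i in range(mini, maxi + 1)]
    # Bring the weights to a common denominator so that an
    # exact integer weighted choice is possible.
    denom = 1
    for w in weights:
        denom = denom * w.denominator // math.gcd(denom, w.denominator)
    iweights = [int(w * denom) for w in weights]  # exact integers
    r = random.randrange(sum(iweights))
    for i, w in enumerate(iweights):
        if r < w:
            return mini + i
        r -= w

# Example: sample_discrete(lambda i: Fraction(1, i), 1, 6)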

If—

the following method samples exactly from that distribution:

METHOD SampleDiscreteCDF(mini, maxi)
  // Setup: difference the CDF to get the
  // weight of each integer in [mini, maxi]
  weights=[]
  lastcdf=MakeRatio(0,1)
  for i in mini..maxi
     c=CDF(i)
     AddItem(weights, c-lastcdf)
     lastcdf=c
  end
  weights=NormalizeRatios(weights)
  // Sampling
  return mini + WeightedChoice(weights)
END METHOD

In other cases, the discrete distribution can still be approximately sampled. The following cases will lead to an approximate sampler unless the values of the CDF or PDF-like function cover all the distribution and are calculated exactly (without error).

METHOD FPRatio(fp)
  expo=FPExponent(fp)
  sig=FPSignificand(fp)
  radix=FPRadix(fp)
  if expo>=0: return MakeRatio(sig * pow(radix, expo), 1)
  return MakeRatio(sig, pow(radix, abs(expo)))
END METHOD

Note: In addition, some discrete (and continuous) distributions are known only through an oracle (or "black box") that produces random variates that follow that distribution. Algorithms can use this oracle to produce new random variates that follow a different distribution. When the mean (average) of the new variates equals the oracle's, the algorithm is called an unbiased estimator (because in a way, it "estimates" some aspect of the oracle's numbers). One example is the Bernoulli factory (see my article "Bernoulli Factory Algorithms"), which takes flips of a biased "coin" (the oracle) and produces the flip of a new "coin" biased a different way. Another example is the "Bernoulli race" described in Weighted Choice.

Inverse Transform Sampling

Inverse transform sampling (or simply inversion) is the most generic way to sample a number from a probability distribution.

If the distribution has a known quantile function, generate a uniform random variate in (0, 1) if that number wasn't already pregenerated, and take the quantile of that number. However:

The following method samples from a distribution via inversion, with an accuracy of 1/pow(BASE, precision) ((Devroye and Gravel 2020)(12), but extended for any base; see also (Bringmann and Friedrich 2013, Appendix A)(54)). In the method, ICDF(u, ubits, prec) returns a two-item list containing lower and upper bounds, respectively, of a number that is within 1/pow(BASE, prec) of the true quantile of u/pow(BASE, ubits), and BASE is the digit base (e.g., 2 for binary or 10 for decimal).

METHOD Inversion(precision)
   u=0
   ubits=0
   threshold=MakeRatio(1,pow(BASE, precision))*2
   incr=8
   while true // until a value is returned
      incr=8
      if ubits==0: incr=precision
      // NOTE: If a uniform number (`n`) is already pregenerated,
      // use the following instead:
      // u = rem(floor(n*pow(BASE, ubits+incr)), pow(BASE, incr))
      u=u*pow(BASE,incr)+RNDINTEXC(pow(BASE,incr))
      ubits=ubits+incr
      // Get upper and lower bound
      bounds=ICDF(u,ubits,precision)
      if bounds[0]>bounds[1]: return error
      diff=bounds[1]-bounds[0]
      if diff<=threshold: return bounds[0]+diff/2
   end
END METHOD

Some applications need to convert a pregenerated number in [0, 1] (usually a number sampled from a uniform distribution), called u01 below, to a non-uniform distribution via quantiles. Notable cases include copula methods, order statistics, and Monte Carlo methods involving low-discrepancy sequences. The following way to compute quantiles is exact in theory:

In addition, the following methods approximate the quantile if the application can trade accuracy for speed:

 

METHOD ICDFFromContPDF(u01, mini, maxi, step)
  pieces=[]
  areas=[]
  // Setup
  lastvalue=mini
  lastweight=PDF(mini)
  cumuarea=0
  i = mini+step; while i <= maxi
     value=i; weight=PDF(i)
     cumuarea=cumuarea+abs((weight + lastweight) * 0.5 *
            (value - lastvalue))
     AddItem(pieces,[lastweight,weight,lastvalue,value])
     AddItem(areas,cumuarea)
     lastweight=weight;lastvalue=value
     if i==maxi: break
     i = min(i + step, maxi)
  end
  for i in 0...size(areas): areas[i]=areas[i]/cumuarea
  // Sampling
  prevarea=0
  for i in 0...size(areas)
     cu=areas[i]
     if u01<=cu
         p=pieces[i]; u01=(u01-prevarea)/(cu-prevarea)
         s=p[0]; t=p[1]; v=u01
         if s!=t: v=(s-sqrt(t*t*u01-s*s*u01+s*s))/(s-t)
         return p[2]+(p[3]-p[2])*v
     end
     prevarea=cu
  end
  return error
END METHOD

Notes:

  1. If only percentiles of data (such as the median or 50th percentile, the minimum or 0th percentile, or the maximum or 100th percentile) are available, the quantile function can be approximated via those percentiles. The Nth percentile corresponds to the quantile for N/100.0. Missing values for the quantile function can then be filled in by interpolation (such as spline fitting). If the raw data points are available, see "Sampling from a Distribution of Data Points" instead.
  2. Taking the kth smallest of n random variates distributed the same way is the same as taking the kth smallest of n uniform random variates in the interval [0, 1) (also known as the kth order statistic; e.g., BetaDist(k, n+1-k)) and finding its quantile (Devroye 2006)(82); (Devroye 1986, p. 30)(20).

Rejection Sampling with a PDF-Like Function

If the distribution has a known PDF-like function (PDF), and that function can be more easily sampled by another distribution with its own PDF-like function (PDF2) that "dominates" PDF in the sense that PDF2(x) >= PDF(x) at every valid x, then generate random variates with the latter distribution until a variate (call it n) that satisfies r <= PDF(n), where r = RNDRANGEMaxExc(0, PDF2(n)), is generated this way (that is, sample points in PDF2 until a point falls within PDF).

A variant of rejection sampling is the squeeze principle, in which a third PDF-like function (PDF3) is chosen that is "dominated" by the first one (PDF) and easier to sample than PDF. Here, a number is accepted if r <= PDF3(n) or r <= PDF(n), where r = RNDRANGEMaxExc(0, PDF2(n)) (Devroye 1986, p. 53)(20).

See also (von Neumann 1951)(55); (Devroye 1986)(20), pp. 41-43; "Rejection Sampling"; and "Generating Pseudorandom Numbers".

Examples:

  1. To sample a random variate in the interval [low, high) from a PDF-like function with a positive maximum value no greater than peak at that interval, generate x = RNDRANGEMaxExc(low, high) and y = RNDRANGEMaxExc(0, peak) until y < PDF(x), then take the last x generated this way. (See also Saucier 2000, pp. 6-7.) If the distribution is discrete, generate x with x = RNDINTEXCRANGE(low, high) instead.
  2. A PDF-like function for a custom distribution, PDF, is exp(-abs(x*x*x)), and the exponential distribution's, PDF2, is exp(-x). The exponential PDF-like function PDF2 "dominates" PDF (at every x that is 0 or greater) if we multiply it by 1.5, so that PDF2 is now 1.5 * exp(-x). Now we can generate numbers from our custom distribution by sampling exponential points until a point falls within PDF. This is done by generating n = Expo(1) until PDF(n) >= RNDRANGEMaxExc(0, PDF2(n)); a Python sketch of this example appears after this list.
  3. The normal distribution's upside-down bell curve has the PDF-like function 1-exp(-(x*x)), and the highest point for this function is peak = max(1-exp(-(low*low)), 1-exp(-(high*high))). Sampling this distribution then uses the algorithm in example 1.
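Example 2 can be sketched in Python as follows (random.expovariate(1) plays the role of Expo(1); floating-point arithmetic makes this approximate):

import math, random

def sample_custom():
    # PDF(x) = exp(-abs(x**3)) is dominated by
    # PDF2(x) = 1.5*exp(-x) for x >= 0.
    while True:
        n = random.expovariate(1)  # Expo(1)
        if math.exp(-abs(n ** 3)) >= random.uniform(0, 1.5 * math.exp(-n)):
            return n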

Note: In the Python sample code, moore.py and numbers_from_dist sample from a distribution via rejection sampling (Devroye and Gravel 2020)(12), (Sainudiin and York 2013)(83).

Alternating Series

If a PDF-like function for the target distribution is not known exactly, but can be approximated from above and below by two series expansions that converge to that function as more terms are added, the alternating series method can be used. This still requires a "dominating" PDF-like function (PDF2(x)) to serve as the "easy-to-sample" distribution. Call the series expansions UPDF(x, n) and LPDF(x, n), respectively, where n is the number of terms in the series to add. To sample the distribution using this method (Devroye 2006)(82): (1) Sample from the "dominating" distribution, and let x be the sampled number; (2) set n to 0; (3) accept x if r <= LPDF(x, n), or go to step 1 if r >= UPDF(x, n), or repeat this step with n increased by 1 if neither is the case, where r = RNDRANGEMaxExc(0, PDF2(x)).

Markov-Chain Monte Carlo

Markov-chain Monte Carlo (MCMC) is a family of algorithms for sampling many random variates from a probability distribution by building a Markov chain of random values that build on each other until they converge to the given distribution. In general, however, a given chain's random values will have a statistical dependence on each other, and it takes an unknown time for the chain to converge (which is why techniques such as "thinning" — keeping only every Nth sample — or "burn-in" — skipping iterations before sampling — are often employed). MCMC can also estimate the distribution's sampling domain for other samplers, such as rejection sampling (above).

MCMC algorithms(84) include Metropolis–Hastings, slice sampling, and Gibbs sampling (see also the Python sample code). Gibbs sampling is special in that it uses not a PDF-like function, but two or more conditional distributions, each of which samples a number using the most recent value from the other distributions; together these samples converge to a joint distribution.

Example: In one Gibbs sampler, an initial value for y is chosen, then multiple x, y pairs of random variates are generated, where x = BetaDist(y, 5) then y = Poisson(x * 10).
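A Python sketch of that Gibbs sampler follows; the naive poisson helper is adequate only for small means, and the clamp on y is a guard because random.betavariate requires a positive first parameter (both helpers are illustrative assumptions):

import math, random

def poisson(mean):
    # Naive Poisson sampler (product-of-uniforms method);
    # fine for the small means used here.
    l = math.exp(-mean)
    k = 0
    p = 1.0
    while True:
        p *= random.random()
        if p <= l:
            return k
        k += 1

def gibbs_chain(n, burn_in=100):
    samples = []
    y = 1.0  # initial value for y
    for i in range(n + burn_in):
        x = random.betavariate(max(y, 1e-9), 5)  # x given y
        y = poisson(x * 10)                      # y given x
        if i >= burn_in:
            samples.append((x, y))
    return samples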

Piecewise Linear Distribution

Requires random real numbers.

A piecewise linear distribution describes a continuous distribution with weights at known points and other weights determined by linear interpolation (smoothing). The PiecewiseLinear method (in the pseudocode below) takes two lists as follows (see also (Kschischang 2019)(85)):

 

METHOD PiecewiseLinear(values, weights)
  if size(values)!=size(weights) or size(values)==0: return error
  if size(values)==1: return values[0]
  areas=[]
  for i in 1...size(values)
     area=abs((weights[i] + weights[i-1]) *
         (values[i] - values[i-1]) / 2) // NOTE: Not rounded
     AddItem(areas,area)
  end
  // NOTE: If values and weights are rational
  // numbers, use `areas=NormalizeRatios(areas)` instead
  // of finding `areas` as given below.
  ratios=[]
  for w in areas: AddItem(ratios, FPRatio(w))
  areas=NormalizeRatios(ratios)
  index=WeightedChoice(areas)
  w=values[index+1]-values[index]
  if w==0: return values[index]
  m=(weights[index+1]-weights[index])/w
  h2=(weights[index+1]+weights[index])
  ww=w/2.0; hh=h2/2.0
  x=RNDRANGEMaxExc(-ww, ww)
  if RNDRANGEMaxExc(-hh, hh)>x*m: x=-x
  return values[index]+x+ww
END METHOD

Note: The Python sample code contains a variant to the method above for returning more than one random variate in one call.

Example: Assume values is the following: [0, 1, 2, 2.5, 3], and weights is the following: [0.2, 0.8, 0.5, 0.3, 0.1]. The weight for 2 is 0.5, and that for 2.5 is 0.3. Since 2 has a higher weight than 2.5, numbers near 2 are more likely to be chosen than numbers near 2.5 with the PiecewiseLinear method.

Specific Distributions

Methods to sample additional distributions are given in a separate page. They cover the normal, gamma, beta, von Mises, stable, and multivariate normal distributions as well as copulas. Note, however, that most of the methods won't sample the given distribution in a manner that minimizes approximation error, but they may still be useful if the application is willing to trade accuracy for speed.

Index of Non-Uniform Distributions

Many distributions here require random real numbers.

A † symbol next to a distribution means that a sample from the distribution can be shifted by a location parameter (mu) then scaled by a scale parameter greater than 0 (sigma). Example: num * sigma + mu.

A ⬦ symbol next to a distribution means the sample can be scaled to any range, which is given with the minimum and maximum values mini and maxi. Example: mini + (maxi - mini) * num.

For further examples and distributions, see (Devroye 1996)(86) and (Crooks 2019)(87).

Most commonly used:

Miscellaneous:

Geometric Sampling

Requires random real numbers.

This section contains ways to choose independent uniform random points in or on geometric shapes.

Random Points Inside a Simplex

The following pseudocode generates a random point inside an n-dimensional simplex (simplest convex figure, such as a line segment, triangle, or tetrahedron). It takes one parameter, points, a list consisting of the n plus one vertices of the simplex, all of a single dimension n or greater. The special case of 3 points came from Osada et al. (2002)(100). See also Grimme (2015)(101), which shows MATLAB code for generating a random point uniformly inside a simplex just described, but in a different way.

METHOD VecAddProd(a, b, c)
  for j in 0...size(a): a[j]=a[j]+b[j]*c
END METHOD

METHOD RandomPointInSimplex(points):
   ret=NewList()
   if size(points) > size(points[0])+1: return error
   if size(points)==1 // Return a copy of the point
     for i in 0...size(points[0]): AddItem(ret,points[0][i])
     return ret
   end
   if size(points)==3
      // Equivalent to sqrt(RNDRANGE(0,1))
     rs=max(RNDRANGE(0,1), RNDRANGE(0,1))
     r2=RNDRANGE(0,1)
      ret=NewList()
      for i in 0...size(points[0]): AddItem(ret, 0)
     VecAddProd(ret,points[0],1.0-rs)
     VecAddProd(ret,points[1],(1.0-r2)*rs)
     VecAddProd(ret,points[2],r2*rs)
     return ret
   end
   gammas=NewList()
   // Sample from the simplex
   for i in 0...size(points): AddItem(gammas, Expo(1))
   tsum=Sum(gammas)
   for i in 0...size(gammas): gammas[i] = gammas[i] / tsum
   gammas[size(gammas)-1]=0 // To omit last gamma in sum
   tot = 1.0 - Sum(gammas)
   // Build the final point
   for i in 0...size(points[0]): AddItem(ret, points[0][i]*tot)
   for i in 1...size(points): VecAddProd(
      ret, points[i], gammas[i-1])
   return ret
END METHOD

Random Points on the Surface of a Hypersphere

The following pseudocode shows how to generate a random N-dimensional point on the surface of an N-dimensional hypersphere, centered at the origin, of radius radius (if radius is 1, the result can also serve as a unit vector in N-dimensional space). Here, Norm is given in the appendix. See also (Weisstein)(102).

METHOD RandomPointInHypersphere(dims, radius)
  x=0
  while x==0
    ret=[]
    for i in 0...dims: AddItem(ret, Normal(0, 1))
    x=Norm(ret)
  end
  invnorm=radius/x
  for i in 0...dims: ret[i]=ret[i]*invnorm
  return ret
END METHOD
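The same approach as a Python sketch, with random.gauss supplying the Normal(0, 1) samples (point_on_hypersphere is an illustrative name):

import random

def point_on_hypersphere(dims, radius):
    # Normalize a vector of Normal(0, 1) samples to the given radius.
    while True:
        pt = [random.gauss(0, 1) for _ in range(dims)]
        norm = sum(c * c for c in pt) ** 0.5
        if norm > 0:
            return [c * radius / norm for c in pt]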

Note: The Python sample code contains an optimized method for points on the edge of a circle.

Example: To generate a random point on the surface of a cylinder running along the Z axis, generate random X and Y coordinates on the edge of a circle (2-dimensional hypersphere) and generate a random Z coordinate by RNDRANGE(mn, mx), where mn and mx are the lowest and highest Z coordinates possible.

Random Points Inside a Box, Ball, Shell, or Cone

To generate a random point on or inside—

Example: To generate a random point inside a cylinder running along the Z axis, generate random X and Y coordinates inside a disc (2-dimensional ball) and generate a random Z coordinate by RNDRANGE(mn, mx), where mn and mx are the lowest and highest Z coordinates possible.

Notes:

  1. The Python sample code contains a method for generating a random point on the surface of an ellipsoid modeling the Earth.
  2. Sampling a half-ball, half-shell, or half-hypersphere can be done by sampling a full ball, shell, or hypersphere and replacing one of the dimensions of the result with its absolute value.

Random Latitude and Longitude

To generate a random point on the surface of a sphere in the form of a latitude and longitude (in radians with west and south coordinates negative)(106)

Acknowledgments

I acknowledge the commenters to the CodeProject version of this page, including George Swan, who referred me to the reservoir sampling method.

I also acknowledge Christoph Conrads, who gave suggestions on parts of this article.

Other Documents

The following are some additional articles I have written on the topic of randomization and pseudorandom variate generation. All of them are open-source.

Notes

Appendix

 

Sources of Random Numbers

All the randomization methods presented on this page assume that we have an endless source of numbers such that—

That is, the methods assume we have a "source of (uniform) random numbers". (Thus, none of these methods generate random numbers themselves, strictly speaking, but rather, they assume we have a source of them already.)

However, this is an ideal assumption which is hard if not impossible to achieve in practice.

Indeed, most applications make use of pseudorandom number generators (PRNGs), which are algorithms that produce random-behaving numbers, that is, numbers that simulate the ideal "source of random numbers" mentioned above. As a result, the performance and quality of the methods on this page will depend in practice on the quality of the PRNG (or other generator of random-behaving numbers) even if they don't in theory.

Note that the "source of random numbers" can be simulated by a wide range of devices and programs, including PRNGs, so-called "true random number generators", and application programming interfaces (APIs) that provide uniform random-behaving numbers to applications. An application ought to choose devices or programs that simulate the "source of random numbers" well enough for its purposes, including in terms of their statistical quality, "unguessability", or both. However, it is outside this document's scope to give further advice on this choice.

The randomization methods in this document are deterministic (that is, they produce the same values given the same state and input), regardless of what simulates the "source of random numbers" (such as a PRNG or a "true random number generator"). The exceptions are as follows:

Norm Calculation

The following method calculates the norm of a vector (list of numbers), more specifically the ℓ2 norm of that vector.

METHOD Norm(vec)
  ret=0
  rc=0
  for i in 0...size(vec)
    rc=vec[i]*vec[i]-rc
    rt=rc+ret
    rc=(rt-ret)-rc
    ret=rt
  end
  return sqrt(ret)
END METHOD

There are other kinds of norms besides the ℓ2 norm. More generally, the ℓp norm, where p is 1 or greater or is ∞ (infinity), is the pth root of the sum of pth powers of a vector's components' absolute values (or, if p is ∞, the highest absolute value among those components). An ℓp ball or sphere of a given radius is a ball or sphere that is bounded by or traces, respectively, all points with an ℓp norm equal to that radius. (An ℓ∞ ball or sphere is box-shaped.)
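A small Python sketch of the ℓp norm just described (lp_norm is an illustrative name; pass math.inf for the infinity norm):

import math

def lp_norm(vec, p):
    # pth root of the sum of pth powers of absolute values;
    # the highest absolute value if p is infinite.
    if p == math.inf:
        return max(abs(x) for x in vec)
    return sum(abs(x) ** p for x in vec) ** (1.0 / p)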

Implementation Considerations

  1. Shell scripts and Microsoft Windows batch files are designed for running other programs, rather than general-purpose programming. However, batch files and bash (a shell script interpreter) might support a variable which returns a uniformly distributed "random" integer in the interval [0, 32767] (called %RANDOM% or $RANDOM, respectively); neither variable is designed for information security. Whenever possible, the methods in this document should not be implemented in shell scripts or batch files, especially if information security is a goal.
  2. Query languages such as SQL have no procedural elements such as loops and branches. Moreover, standard SQL has no way to choose a number at random, but popular SQL dialects often do — with idiosyncratic behavior — and describing differences between SQL dialects is outside the scope of this document. Whenever possible, the methods in this document should not be implemented in SQL, especially if information security is a goal.
  3. Stateless PRNGs. Most designs of pseudorandom number generators (PRNGs) in common use maintain an internal state and update that state each time a number is sampled at random. But for stateless PRNG designs (including so-called "splittable" PRNGs), RNDINT(), NEXTRAND(), and other random sampling methods in this document may have to be adjusted accordingly (usually by adding an additional parameter).
  4. Multithreading. Multithreading can serve as a fast way to generate multiple random variates at once; it is not reflected in the pseudocode given in this page. In general, this involves dividing a block of memory into chunks, assigning each chunk to a thread, giving each thread its own instance of a pseudorandom number generator (or another program that simulates a "source of random numbers"), and letting each thread fill its assigned chunk with random variates. For an example, see "Multithreaded Generation".
  5. Fixed amount of "randomness". Given a k-bit unsigned integer n (which lies in the interval [0, pow(2, k)) and is chosen uniformly at random), values that approximate a probability distribution (e.g., Poisson, Normal) can be generated with the integer n either by finding the quantile for (n+1)/(pow(2, k)+2) or by treating n as a seed for a local PRNG and using the PRNG to generate a sample from that distribution. An application should use this suggestion only if it wants to ensure a fixed amount of "randomness" per sampled outcome is ultimately drawn, because the sampling method can return one of only pow(2, k) different outcomes this way. (In general, n can't be chosen uniformly at random with a fixed number of randomly chosen bits, unless the number of different outcomes for n is a power of 2.)

Security Considerations

If an application samples at random for information security purposes, such as to generate passwords or encryption keys at random, the following applies:

  1. "Cryptographic generators". The application has to use a device or program that generates random-behaving numbers that are hard to guess for information security purposes (a so-called "cryptographic generator"). Choosing such a device or program is outside the scope of this document.
  2. Timing attacks. Certain security attacks have exploited timing and other differences to recover cleartext, encryption keys, or other sensitive data. Thus, so-called "constant-time" security algorithms have been developed. (In general, "constant-time" algorithms are designed to have no timing differences, including memory access patterns, that reveal anything about any secret inputs, such as keys, passwords, or "seeds" for pseudorandom number generators). But even if an algorithm has variable running time (e.g., rejection sampling), it may or may not have security-relevant timing differences, especially if it does not reuse secrets.
  3. Security algorithms out of scope. Security algorithms that take random secrets to generate random security parameters, such as encryption keys, public/private key pairs, elliptic curves, or points on an elliptic curve, are outside this document's scope.
  4. Floating-point numbers. Numbers chosen at random for security purposes are almost always integers (and, in very rare cases, fixed-point numbers). Even in the few security applications where those numbers are floating-point numbers (differential privacy and lattice-based cryptography), there are ways to avoid such floating-point numbers(107).

License

Any copyright to this page is released to the Public Domain. In case this is not possible, this page is also licensed under Creative Commons Zero.