Mathematics Of Mapping Uniform Distributions To Non-Linear Curves

Defining Uniform Distributions

A uniform distribution refers to a probability distribution in which all outcomes have an equal likelihood of occurring. Some key properties of uniform distributions include:

  • The probability density function (PDF) is constant over the distribution’s support range.
  • All values within the distribution’s bounds have equal probability.
  • The distribution has maximum entropy among all distributions with support on the same interval.

Some common examples of uniform distributions include:

  • The continuous uniform distribution, with probability density f(x) = 1/(b−a) over the interval [a, b].
  • The discrete uniform distribution over the set {a, a+1, …, b} with each value having probability 1/(b−a+1).
  • Uniform distributions over more complex domains like spheres, polygons, or other geometric shapes.
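As an illustrative sketch (using NumPy, with arbitrary bounds a = 2, b = 5 chosen for this example), both continuous and discrete uniform samples can be drawn directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# Continuous uniform on [a, b]: constant density 1/(b - a)
a, b = 2.0, 5.0
continuous = rng.uniform(a, b, size=10_000)

# Discrete uniform on {2, 3, 4, 5}: each value has probability 1/(b - a + 1) = 1/4
discrete = rng.integers(2, 5, size=10_000, endpoint=True)

print(continuous.min(), continuous.max())  # both stay inside [2, 5]
print(np.bincount(discrete)[2:])           # roughly equal counts per value
```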

Non-Linear Mapping Functions

A non-linear function is one whose graph does not form a straight line. Some key properties include:

  • The function’s first derivative is not constant.
  • Scaling the input does not produce a proportional scaling of the output.
  • The principle of superposition does not apply.

Some common non-linear function types include:

  • Logarithmic functions like log(x) that grow slowly and asymptotically.
  • Exponential functions like eˣ that have exponentially increasing outputs.
  • Trigonometric functions like sin(x) and cos(x) that oscillate cyclically.
  • Power functions like xⁿ that have polynomially growing outputs.
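These properties can be checked numerically; for example, a quick sketch showing that superposition holds for a linear map but fails for the exponential:

```python
import numpy as np

f = np.exp               # non-linear
g = lambda x: 3.0 * x    # linear

x1, x2 = 1.0, 2.0

# Superposition f(x1 + x2) == f(x1) + f(x2) fails for exp(x)...
print(np.isclose(f(x1 + x2), f(x1) + f(x2)))  # False
# ...but holds for the linear function.
print(np.isclose(g(x1 + x2), g(x1) + g(x2)))  # True
```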

Mapping Uniforms to Non-Linears

There are several motivations for mapping uniform distributions to non-linear functions:

  • Modeling real-world phenomena with known statistical patterns.
  • Improving sampling efficiency in areas like Monte Carlo integration.
  • Adding nonlinearity in machine learning for richer representations.

The standard mathematical technique is inverse transform sampling, which applies the inverse of the target cumulative distribution function (CDF). The steps are:

  1. Define the source uniform distribution U ~ Unif(a, b).
  2. Select the target distribution with CDF F(y).
  3. Compute the CDF of U: F_U(x) = (x − a)/(b − a).
  4. Compute F⁻¹, the inverse of the target CDF (its quantile function).
  5. Generate a sample x ~ U and compute y = F⁻¹(F_U(x)).

The resulting samples y follow the target distribution. For example, mapping U ~ Unif(0, 1) to an exponential distribution:

import numpy as np

def uniform_to_exponential(sample, rate=1.0):
  # For U ~ Unif(0, 1), the CDF is the identity: F_U(x) = x
  F_U = sample
  # Inverse of the exponential CDF F(y) = 1 - exp(-rate * y)
  return -np.log(1 - F_U) / rate

samples = np.random.rand(10000)
mapped_samples = uniform_to_exponential(samples)

The equivalent C++ implementation using standard libraries is:

#include <cmath>
#include <random>
#include <vector>

double uniform_to_exponential(double sample, double rate = 1.0) {
  // For U ~ Unif(0, 1), the CDF is the identity: F_U(x) = x
  double F_U = sample;
  // Inverse of the exponential CDF F(y) = 1 - exp(-rate * y)
  return -std::log(1.0 - F_U) / rate;
}

int main() {
  std::random_device rd;
  std::mt19937 rng(rd());
  std::uniform_real_distribution<double> U(0.0, 1.0);

  std::vector<double> samples(10000);
  for (auto& s : samples) {
    s = uniform_to_exponential(U(rng));
  }
}

Extending the Mapping Methodology

The inverse CDF mapping technique can be extended in several ways:

Handling Multiple Non-Linear Functions

We can map a single uniform distribution U to multiple non-linear target distributions by using their inverse CDFs:

U = Uniform(0, 1)
f1 = Exponential(2.5)     # first target distribution
f2 = LogNormal(1.3, 0.5)  # second target distribution

y1 = f1⁻¹(U)  # map U to exponential via its inverse CDF
y2 = f2⁻¹(U)  # map same U to lognormal via its inverse CDF

As long as U is the common source, the outputs y1 and y2 will have the desired distributions while remaining correlated through U.
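A minimal sketch of this shared-source idea in NumPy, using the same parameters as the pseudocode above (the standard library's NormalDist supplies the normal quantile function Φ⁻¹ needed for the log-normal inverse CDF):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
u = rng.random(10_000)  # the common uniform source U ~ Unif(0, 1)

# Inverse CDF of Exponential(rate=2.5): f1⁻¹(u) = -ln(1 - u) / 2.5
y1 = -np.log(1 - u) / 2.5

# Inverse CDF of LogNormal(1.3, 0.5): f2⁻¹(u) = exp(1.3 + 0.5 * Φ⁻¹(u))
phi_inv = np.array([NormalDist().inv_cdf(p) for p in u])
y2 = np.exp(1.3 + 0.5 * phi_inv)

# Both outputs are increasing functions of the same u, so they stay correlated
print(np.corrcoef(y1, y2)[0, 1])
```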

Higher Dimensionality

The methodology extends naturally to higher dimensions: an N-dimensional uniform hypercube can be mapped to an N-dimensional non-uniform distribution by applying an inverse CDF transform to each marginal (or to each conditional, when the components are dependent).

For example, mapping a 2D uniform to a bivariate correlated normal distribution:

U1 = Uniform(−1, 1)
U2 = Uniform(−1, 1)
Z = BivariateNormal(μ, Σ)

F1 = CDF_U1(U1)
F2 = CDF_U2(U2)

Y1 = CDF_Z1⁻¹(F1)
Y2 = CDF_Z2⁻¹(F2 | Y1)  # inverse of the conditional normal CDF given Y1

This allows constructing complex multivariate non-linear distributions from simple uniform building blocks.
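A sketch of the 2D construction in NumPy, where the standard-normal marginals and correlation ρ = 0.8 are illustrative assumptions (NormalDist supplies Φ⁻¹):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
inv = np.vectorize(NormalDist().inv_cdf)  # normal quantile function Φ⁻¹

rho = 0.8  # illustrative correlation for standard-normal marginals
u1 = rng.uniform(-1, 1, 10_000)
u2 = rng.uniform(-1, 1, 10_000)

# Normalize each uniform to (0, 1) via its CDF F(x) = (x + 1) / 2
eps = 1e-12
f1 = np.clip((u1 + 1) / 2, eps, 1 - eps)
f2 = np.clip((u2 + 1) / 2, eps, 1 - eps)

# Y1 from the first marginal; Y2 from the normal conditional on Y1:
# Y2 | Y1 ~ N(rho * Y1, 1 - rho^2)
y1 = inv(f1)
y2 = rho * y1 + np.sqrt(1 - rho**2) * inv(f2)

print(np.corrcoef(y1, y2)[0, 1])  # close to rho
```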

Optimization and Efficiency

Computing inverse CDFs can be expensive, so methods like caching, approximation, and hardware acceleration help. Random number generation techniques like Ziggurat algorithms combine table lookup, rejection sampling, and other tricks for faster sampling.
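As a sketch of the caching idea: tabulate the inverse CDF on a grid once, then serve new samples by linear interpolation with np.interp, trading a transcendental evaluation per sample for a cheap table lookup (the grid size and range below are arbitrary choices):

```python
import numpy as np

# Precompute once: quantiles of Exponential(1) on a probability grid
grid_p = np.linspace(0.0, 0.999, 4096)
grid_q = -np.log(1 - grid_p)  # exact inverse CDF at the grid points

def exponential_inv_cdf_cached(u):
    # Linear interpolation into the precomputed table
    return np.interp(u, grid_p, grid_q)

rng = np.random.default_rng(0)
u = rng.random(100_000) * 0.999  # keep samples inside the tabulated range
approx = exponential_inv_cdf_cached(u)
exact = -np.log(1 - u)
print(np.max(np.abs(approx - exact)))  # small interpolation error
```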

Applying Mapped Non-Linear Distributions

Key use cases for non-uniform distributions generated through inverse CDF mapping include:

Procedural Content Generation

In computer graphics, uniform noise mapped through non-linear distributions can create realistic textures. In games, procedurally generated geometry such as turbine blade profiles can be sampled from distributions fitted to real airfoil data.

Improving Sampling Quality

Non-linear mapping increases efficiency in Monte Carlo methods. In finance, for example, option pricing integrates non-linear payoffs over stochastic stock paths; inverse transform sampling concentrates samples where the integrand carries most of its weight.
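For instance, a sketch of a Monte Carlo estimate of E[X²] for X ~ Exp(1) (which equals 2 exactly), drawing the exponential samples by inverse transform:

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random(1_000_000)

# Inverse transform: x follows Exp(1), whose second moment is exactly 2
x = -np.log(1 - u)
estimate = np.mean(x**2)
print(estimate)  # close to 2
```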

Matching Real-World Statistical Patterns

Analyzing real-world datasets often reveals non-uniform patterns. Mapping uniforms to fitted distributions allows realistic generative models and simulations aligned with empirical measurements.
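A sketch of this workflow, using synthetic stand-in "measurements" (in practice these would come from a real dataset) and an empirical inverse CDF built from sorted observations:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for an empirical dataset with a non-uniform (log-normal) pattern
data = rng.lognormal(mean=1.0, sigma=0.5, size=5_000)

# Empirical inverse CDF: sorted observations indexed by plotting-position probabilities
sorted_data = np.sort(data)
probs = np.arange(1, len(sorted_data) + 1) / (len(sorted_data) + 1)

def empirical_inv_cdf(u):
    return np.interp(u, probs, sorted_data)

# Map fresh uniforms through the fitted inverse CDF to generate realistic samples
synthetic = empirical_inv_cdf(rng.random(5_000))
print(np.mean(data), np.mean(synthetic))  # similar means
```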
