Gaussian random fields occur in many branches of physics and are particularly important in cosmology. Current theories for the early universe predict a nearly homogeneous universe with tiny energy fluctuations

\[ \delta(q) = \frac{\rho(q) - \bar\rho}{\bar\rho}, \]

with $\rho$ the energy density and $\bar\rho$ the mean density, observable in the cosmic microwave background radiation field. After the epoch of recombination, these fluctuations gravitationally collapse to form the cosmic structure we observe today in cosmological redshift surveys. The seed for the structure in our universe turns out to be very close to a realization of a Gaussian random field with a nearly scale invariant power spectrum. I will here summarize a number of key techniques and results in Gaussian random field theory.
Gaussian random fields
Theories for the origin of our universe describe the density perturbation in space in terms of random fields. Instead of predicting a particular distribution of the energy in the early universe, these theories describe its statistical properties. We will here restrict attention to initial density fluctuations considered as a realization of a Gaussian random field.
The definition
First, note that the density perturbation has zero mean, $\langle \delta(q) \rangle = 0$. When $\delta$ is a realization of a stationary Gaussian random field, the density perturbation at a point $q$ is normally distributed. The spatial correlations are fully characterized by the two-point correlation function

\[ \xi(q_1, q_2) = \langle \delta(q_1)\, \delta(q_2) \rangle. \]

As it turns out, a random field with a Gaussian PDF with zero mean, which is fully characterized by its two-point correlation function $\xi$, is uniquely described by the Gaussian distribution

\[ P[\delta] \propto e^{-S[\delta]}, \qquad S[\delta] = \frac{1}{2} \int \delta(q_1)\, K(q_1, q_2)\, \delta(q_2)\, dq_1\, dq_2, \]

with the kernel $K$ defined as the inverse of the two-point correlation function,

\[ \int \xi(q_1, q_2)\, K(q_2, q_3)\, dq_2 = \delta_D(q_1 - q_3), \]

with $\delta_D$ the $n$-dimensional Dirac delta function. Given a set of fields $A$, the probability that $\delta \in A$ is given by the functional integral

\[ P[\delta \in A] = \frac{\int_A e^{-S[\delta]}\, \mathcal{D}\delta}{\int e^{-S[\delta]}\, \mathcal{D}\delta}, \]

with $\mathcal{D}\delta$ the path integral measure. The expectation value of some functional $F$ of $\delta$ is given by the integral

\[ \langle F[\delta] \rangle = \frac{\int F[\delta]\, e^{-S[\delta]}\, \mathcal{D}\delta}{\int e^{-S[\delta]}\, \mathcal{D}\delta}. \]
The random field at a finite set of points
When we restrict the density perturbation to a finite number of points, $\delta_i = \delta(q_i)$ for $i = 1, \dots, N$, we can write the distribution as a multi-dimensional Gaussian distribution

\[ p(\delta_1, \dots, \delta_N) = \frac{1}{\sqrt{(2\pi)^N \det M}} \exp\left( -\frac{1}{2} \sum_{i,j} \delta_i\, (M^{-1})_{ij}\, \delta_j \right), \]

with the covariance matrix $M_{ij} = \langle \delta_i\, \delta_j \rangle = \xi(q_i, q_j)$. Linear statistics $Y_i = \int F_i(q)\, \delta(q)\, dq$ of the density perturbation follow an analogous distribution with zero mean and the covariance matrix $M_{ij} = \langle Y_i\, Y_j \rangle$.
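A realization at a finite set of points can be drawn directly from this multivariate Gaussian. The sketch below does so for an illustrative correlation function $\xi(r) = e^{-r^2/2}$ (not one fixed by these notes) on a line of sample points; the small jitter added to the covariance is a numerical stabilizer, not part of the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

def xi(r):
    """Illustrative two-point correlation function."""
    return np.exp(-r**2 / 2)

q = np.linspace(0.0, 5.0, 50)                # sample points q_i
M = xi(np.abs(q[:, None] - q[None, :]))      # covariance M_ij = xi(|q_i - q_j|)

# Draw realizations delta_i ~ N(0, M) via the Cholesky factor of M;
# the jitter keeps the nearly singular covariance factorizable.
L = np.linalg.cholesky(M + 1e-8 * np.eye(len(q)))
delta = rng.standard_normal((2000, len(q))) @ L.T

# The sample covariance approaches M as the number of realizations grows.
M_est = delta.T @ delta / len(delta)
print(np.max(np.abs(M_est - M)))
```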
The cosmological principle
It follows from the cosmological principle that the initial conditions of our universe are statistically homogeneous (all points are statistically equivalent) and statistically isotropic (all directions are statistically equivalent). By homogeneity, the two-point correlation function depends only on the separation of the points, $\xi(q_1, q_2) = \xi(q_1 - q_2)$. By isotropy, the two-point correlation function is in addition sensitive only to the norm of the separation, $\xi(q_1, q_2) = \xi(\|q_1 - q_2\|)$. In these notes, I will always assume statistical homogeneity and isotropy.
Gaussian random fields in Fourier space
Gaussian random fields are conveniently described in Fourier space. Given the Fourier transform

\[ \hat\delta(k) = \int \delta(q)\, e^{-i k \cdot q}\, dq, \]

and the inverse transformation

\[ \delta(q) = \frac{1}{(2\pi)^n} \int \hat\delta(k)\, e^{i k \cdot q}\, dk, \]

the two-point correlation function of the Fourier modes is diagonal,

\[ \langle \hat\delta(k_1)\, \hat\delta^*(k_2) \rangle = \int\!\!\int \xi(q_1 - q_2)\, e^{-i k_1 \cdot q_1 + i k_2 \cdot q_2}\, dq_1\, dq_2 = (2\pi)^n\, \delta_D(k_1 - k_2)\, P(k_1), \]

with the change of coordinates $r = q_1 - q_2$ and $q = q_2$, and where the power spectrum

\[ P(k) = \int \xi(r)\, e^{-i k \cdot r}\, dr \]

describes the amplitude corresponding to the Fourier modes. Note that we used the identity

\[ \int e^{i (k_2 - k_1) \cdot q}\, dq = (2\pi)^n\, \delta_D(k_1 - k_2). \]
The power spectrum is the Fourier transform of the correlation function. For two-dimensional random fields,

\[ P(k) = 2\pi \int_0^\infty \xi(r)\, J_0(kr)\, r\, dr, \]

with $J_0$ the Bessel function of the first kind and $k = \|k\|$, using the polar coordinate measure $dr = r\, dr\, d\phi$. For three-dimensional random fields,

\[ P(k) = 4\pi \int_0^\infty \xi(r)\, \frac{\sin(kr)}{kr}\, r^2\, dr, \]

using the spherical coordinate measure $dr = r^2 \sin\theta\, dr\, d\theta\, d\phi$.
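These relations can be checked numerically. The snippet below verifies the three-dimensional formula for the illustrative Gaussian pair $\xi(r) = e^{-r^2/2}$, $P(k) = (2\pi)^{3/2} e^{-k^2/2}$.

```python
import numpy as np

# Numeric check of P(k) = 4 pi \int xi(r) sinc(kr) r^2 dr in 3-D for the
# illustrative pair xi(r) = exp(-r^2/2), P(k) = (2 pi)^{3/2} exp(-k^2/2).
r = np.linspace(1e-6, 12.0, 40001)
dr = r[1] - r[0]
xi = np.exp(-r**2 / 2)

for k in (0.5, 1.0, 2.0):
    integrand = xi * np.sin(k * r) / (k * r) * r**2
    P_num = 4 * np.pi * np.sum(integrand) * dr   # simple Riemann sum
    P_exact = (2 * np.pi) ** 1.5 * np.exp(-k**2 / 2)
    print(k, P_num, P_exact)
```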
Not only are the two-point correlations of the Fourier modes diagonal; the Fourier modes are also independent. Since the Fourier modes fully determine the real-space field, we obtain the distribution of the Fourier modes from the real-space distribution.

Using the definition of the kernel as the inverse of the two-point correlation function, the kernel can be written in terms of Fourier space using the convolution theorem, yielding

\[ \hat K(k) = \frac{1}{P(k)}, \]

for all $k$. Substituting the Fourier transform of the kernel into the distribution, we obtain the simple expression

\[ P[\hat\delta] \propto \exp\left( -\frac{1}{2} \int \frac{|\hat\delta(k)|^2}{P(k)}\, \frac{dk}{(2\pi)^n} \right). \]

This demonstrates that the amplitudes $|\hat\delta(k)|$ of the Fourier modes are distributed with a standard deviation set by $\sqrt{P(k)}$, and the arguments $\arg \hat\delta(k)$ are uniformly distributed. Furthermore, note that the Fourier modes are independent modulo the reality condition $\hat\delta(-k) = \hat\delta^*(k)$ for real-valued $\delta$.
Generating realizations
In the analysis above, we observed that Gaussian random fields are most easily expressed in Fourier space, as the exponent of the distribution is diagonal in the modes $\hat\delta(k)$. Note that the Fourier transform of a real-valued function satisfies the reality condition $\hat\delta(-k) = \hat\delta^*(k)$.
Generating a realization of a Gaussian random field on a regular lattice $q_i$, with the values $\delta(q_i)$, can thus be reduced to drawing the normally distributed Fourier modes $\hat\delta(k_i)$ subject to the reality condition $\hat\delta(-k) = \hat\delta^*(k)$. The realization can then be efficiently evaluated with the inverse fast Fourier transform.
These observations allow for an efficient algorithm for the generation of unconstrained Gaussian random fields:
- White noise: Generate a set of independent, identically distributed random numbers $\delta_{\rm wn}(q_i) \sim N(0, 1)$ on the lattice.
- Fourier transform the noise: Fast Fourier transform the realization of the white noise to obtain $\hat\delta_{\rm wn}(k_i)$. The Fourier transform consists of complex random numbers with a Gaussian amplitude and a uniformly distributed argument, satisfying the reality condition $\hat\delta_{\rm wn}(-k) = \hat\delta_{\rm wn}^*(k)$.
- Scale the Fourier modes: Define the Fourier modes $\hat\delta(k_i) = \sqrt{P(k_i)}\, \hat\delta_{\rm wn}(k_i)$. The modes are independent complex random numbers whose amplitudes are distributed with variance $P(k_i)$, satisfying the reality condition $\hat\delta(-k) = \hat\delta^*(k)$.
- Transform to real space: Use the inverse fast Fourier transform to generate the Gaussian random field $\delta(q_i)$.
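The four steps above can be sketched as follows on a two-dimensional periodic lattice; the power-law spectrum $P(k) \propto k^{-2}$ and the lattice size are illustrative choices, and normalization conventions differ between codes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 128

delta_wn = rng.standard_normal((n, n))       # 1. white noise on the lattice
dwn_hat = np.fft.fftn(delta_wn)              # 2. Fourier transform the noise

k = 2 * np.pi * np.fft.fftfreq(n)            # lattice wave numbers
k_norm = np.hypot(*np.meshgrid(k, k, indexing="ij"))
P = np.zeros_like(k_norm)
P[k_norm > 0] = k_norm[k_norm > 0] ** -2.0   # leave the k = 0 mode at zero

delta_hat = np.sqrt(P) * dwn_hat             # 3. scale the Fourier modes
delta = np.fft.ifftn(delta_hat).real         # 4. inverse transform
# The imaginary part vanishes up to round-off because delta_wn is real,
# so the reality condition is satisfied automatically.
print(delta.shape, delta.std())
```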
Generating constrained realizations
When setting up initial conditions for $N$-body simulations, it often suffices to construct an unconstrained Gaussian random field. However, when we want to study the formation of a specific geometric feature, it is most efficient to construct a constrained realization which satisfies a set of linear conditions. We will here follow the Hoffman-Ribak method, using the notation of van de Weygaert and Bertschinger (1996).
We first define the set of linear constraints $\Gamma = \{ C_i[\delta] = c_i \}$ at the points $q_i$ with values $c_i$, for $i = 1, \dots, M$. Note that a linear constraint can be the function value $\delta(q_i)$, a derivative such as $\nabla\delta(q_i)$, or more generally a convolution with some kernel $H_i$,

\[ C_i[\delta] = \int H_i(q)\, \delta(q)\, dq. \]
Now, using Bayes' theorem, we write the conditional distribution

\[ P[\delta \mid \Gamma] = \frac{P[\Gamma \mid \delta]\, P[\delta]}{P[\Gamma]}, \qquad P[\Gamma \mid \delta] = \prod_{i=1}^M \delta_D(C_i[\delta] - c_i), \]

since $C_i[\delta]$ is a linear functional of $\delta$. Since the constraints are linear in terms of the Gaussian random field, the distribution $P[\Gamma]$ is again Gaussian,

\[ P[\Gamma] = \frac{1}{\sqrt{(2\pi)^M \det Q}} \exp\left( -\frac{1}{2} \sum_{i,j} c_i\, (Q^{-1})_{ij}\, c_j \right), \]

with the vector $c = (c_1, \dots, c_M)$ and the covariance matrix $Q_{ij} = \langle C_i[\delta]\, C_j[\delta] \rangle$. The constrained distribution thus takes the form

\[ P[\delta \mid \Gamma] \propto e^{-S[\delta] + \frac{1}{2} c^T Q^{-1} c} \prod_{i=1}^M \delta_D(C_i[\delta] - c_i). \]
It is worthwhile to simplify this expression a bit further. We can write the constraint in Fourier space as

\[ C_i[\delta] = \int H_i(q)\, \delta(q)\, dq = \frac{1}{(2\pi)^n} \int \hat H_i^*(k)\, \hat\delta(k)\, dk, \]

with the power spectrum $P(k)$ and the Fourier transforms $\hat H_i$ and $\hat\delta$ of the kernel and the density perturbation. If we now define the function $\xi_i$ by

\[ \xi_i(q) = \langle \delta(q)\, C_i[\delta] \rangle = \int H_i(q')\, \xi(q - q')\, dq', \]

we obtain a relation between $\hat\xi_i$ and $\hat H_i$,

\[ \hat\xi_i(k) = \hat H_i(k)\, P(k). \]

Substituting this relation in the constraint $C_i$, we obtain

\[ C_i[\delta] = \frac{1}{(2\pi)^n} \int \frac{\hat\xi_i^*(k)}{P(k)}\, \hat\delta(k)\, dk, \]

where $\hat\xi_i$ is the Fourier transform of $\xi_i$.
Let $a_i$ be the $i$th element of the vector $Q^{-1} c$. The identity for the constraint can be used to write, for fields satisfying the constraints,

\[ S[\delta] = S[\delta - \bar\delta] + \frac{1}{2} \sum_{i,j} c_i\, (Q^{-1})_{ij}\, c_j, \]

where we have the mean field

\[ \bar\delta(q) = \sum_i \xi_i(q)\, a_i = \sum_{i,j} \xi_i(q)\, (Q^{-1})_{ij}\, c_j. \]

We can use this to write the distribution as

\[ P[\delta \mid \Gamma] \propto e^{-S[f]} \prod_{i=1}^M \delta_D(C_i[f]), \]

where $f = \delta - \bar\delta$ is the deviation of $\delta$ with respect to the constrained mean field $\bar\delta$. As a consequence, the residue $f$ is again a Gaussian random field.
Generating realizations of a constrained Gaussian random field has thus been reduced to constructing a realization of the residue $f$. At first sight this seems difficult, since the residue vanishes at the constraints. However, Hoffman and Ribak (1991) showed that the residue has an additional property which can be used to generate it from an unconstrained realization. As it turns out, the statistical properties of the residue are independent of the values $c_i$ and depend only on the constraint functionals $C_i$. This leads to the following method:
- Generate an unconstrained Gaussian random field $\tilde\delta$ with the given power spectrum.
- Evaluate the constraints $\tilde c_i = C_i[\tilde\delta]$ on the realization $\tilde\delta$.
- Construct the mean field $\bar{\tilde\delta}$ corresponding to the values $\tilde c_i$ and evaluate the residue $f = \tilde\delta - \bar{\tilde\delta}$.
- Add the residue to the mean field $\bar\delta$ corresponding to the desired values $c_i$, to obtain the constrained realization $\delta = \bar\delta + f$.
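As a concrete illustration, the steps above can be sketched on a one-dimensional periodic lattice, taking the constraints to be field values at chosen lattice points. The power spectrum $P(k) \propto (k^2 + k_0^2)^{-1}$ and the constrained points are illustrative choices; the lattice correlation function is obtained from the same discrete spectrum as the realization.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
k = 2 * np.pi * np.fft.fftfreq(n)
P = 1.0 / (k**2 + 0.1)

# Unconstrained realization (FFT method) and the lattice correlation
# function xi, both built from the same discrete power spectrum.
delta_u = np.fft.ifft(np.sqrt(P) * np.fft.fft(rng.standard_normal(n))).real
xi = np.fft.ifft(P).real            # xi[j] = <delta(q) delta(q + q_j)>

idx = np.array([40, 128, 200])      # constrained lattice points
c = np.array([1.5, -0.5, 1.0])      # desired field values

# Covariance Q of the constraints and cross-correlations xi_i(q).
Q = xi[(idx[:, None] - idx[None, :]) % n]
xi_i = xi[(np.arange(n)[None, :] - idx[:, None]) % n]

def mean_field(values):
    # bar-delta(q) = sum_ij xi_i(q) (Q^{-1})_ij values_j
    return xi_i.T @ np.linalg.solve(Q, values)

# Residue of the unconstrained field, then the constrained realization.
f = delta_u - mean_field(delta_u[idx])
delta_c = mean_field(c) + f
print(delta_c[idx])                 # reproduces [1.5, -0.5, 1.0]
```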
Geometric statistics
In the 1940s, Stephen Rice, while working at Bell Labs, developed a framework to calculate the number densities of points in one-dimensional random fields. In subsequent years, the results were extended to multi-dimensional random fields.
Rice’s formula: Consider the set of points

\[ \mathcal{S} = \{\, q \mid F_i(q) = 0,\ i = 1, \dots, n \,\}, \]

with the smooth conditions $F_i$ of the function $f$ and its derivatives, with $q \in \mathbb{R}^n$. For convenience we drop the dependence on $q$ from the notation, i.e., $F_i = F_i(q)$. When $f$ is a homogeneous random field and the set $\mathcal{S}$ almost always consists of only isolated points, the number density of points in $\mathcal{S}$ is equal to the expectation value

\[ \mathcal{N} = \left\langle \delta_D^{(n)}(F)\, \left| \det \frac{\partial F_i}{\partial q_j} \right| \right\rangle. \]
Proof: Given a continuous and differentiable realization of the random field, construct the generalized function

\[ n(q) = \delta_D^{(n)}(F(q))\, \left| \det \frac{\partial F_i}{\partial q_j} \right|. \]

The number of points in $\mathcal{S}$ for this realization is given by the integral $N = \int n(q)\, dq$. Now, since the points in $\mathcal{S}$ are isolated, every point $q_a \in \mathcal{S}$ has an open set $U_a$ containing only this point. We can write the number of points as

\[ N = \sum_a \int_{U_a} \delta_D^{(n)}(F(q))\, \left| \det \frac{\partial F_i}{\partial q_j} \right| dq, \]

using the transformation of the Dirac delta function

\[ \delta_D^{(n)}(F(q)) = \sum_a \frac{\delta_D^{(n)}(q - q_a)}{\left| \det \partial F_i / \partial q_j \right|_{q_a}}, \]

for a smooth vector-valued function $F$ vanishing at the isolated points $q_a$. As the integrand only has support at the points $q_a$, we can write the sum of integrals over the sets $U_a$ as a single integral over the full domain. Using these identities, we obtain the number density of points in $\mathcal{S}$ as the expectation value

\[ \mathcal{N} = \lim_{V \to \infty} \frac{\langle N \rangle}{V} = \langle n(q) \rangle, \]

in a regulating box of volume $V$ in the limit $V \to \infty$, since the expectation value $\langle n(q) \rangle$ is independent of the position $q$ by statistical homogeneity.
To illustrate Rice’s formula, consider a few examples:
Example 1: The number density of level crossings $f = \nu$ in a one-dimensional random field corresponds to the condition $F_1 = f - \nu$. Using Rice’s formula we obtain

\[ \mathcal{N} = \langle \delta_D(f - \nu)\, |f'| \rangle = \int |f'|\, p(\nu, f')\, df'. \]

We can select up- and down-crossings by restricting the integral over $f'$ to $f' > 0$ and $f' < 0$.
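This rate can be checked with a Monte Carlo experiment. For a Gaussian field, carrying out the integral over $f'$ gives the classic up-crossing rate $\mathcal{N}_{\rm up} = \frac{\sigma_1}{2\pi \sigma_0} e^{-\nu^2 / 2\sigma_0^2}$, with $\sigma_0^2 = \langle f^2 \rangle$ and $\sigma_1^2 = \langle f'^2 \rangle$; the Gaussian power spectrum below is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 1 << 16
k = 2 * np.pi * np.fft.fftfreq(n)
P = np.exp(-(k / 0.3) ** 2)          # smooth, well below the Nyquist scale

sigma0 = np.sqrt(np.mean(P))         # lattice spectral moments
sigma1 = np.sqrt(np.mean(k**2 * P))
nu = sigma0                          # cross the one-sigma level
rate_theory = sigma1 / (2 * np.pi * sigma0) * np.exp(-nu**2 / (2 * sigma0**2))

counts = 0
for _ in range(20):
    f = np.fft.ifft(np.sqrt(P) * np.fft.fft(rng.standard_normal(n))).real
    # up-crossings: f passes nu from below between neighbouring sites
    counts += np.count_nonzero((f[:-1] < nu) & (f[1:] >= nu))
rate_mc = counts / (20 * (n - 1))
print(rate_mc, rate_theory)          # these should agree to a few percent
```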
Example 2: The number density of critical points in a one-dimensional random field at function value $\nu$. At a critical point the first order derivative vanishes, i.e., consider the conditions $F_1 = f'$ and $f = \nu$. The number density thus takes the form

\[ \mathcal{N}(\nu) = \langle \delta_D(f - \nu)\, \delta_D(f')\, |f''| \rangle = \int \delta_D(f - \nu)\, \delta_D(f')\, |f''|\, p(f, f', f'')\, df\, df'\, df''. \]

We can refine this to the number density of maxima and minima by restricting the integral over $f''$ to the intervals $f'' < 0$ and $f'' > 0$.
Example 3: The number density of critical points in a two-dimensional random field at function value $\nu$. At a critical point the first order derivatives vanish, i.e., consider the conditions $F_1 = \partial f / \partial q_1$ and $F_2 = \partial f / \partial q_2$. The number density thus takes the form

\[ \mathcal{N}(\nu) = \left\langle \delta_D(f - \nu)\, \delta_D^{(2)}(\nabla f)\, \left| \det H \right| \right\rangle, \]

with $H_{ij} = \partial^2 f / \partial q_i \partial q_j$ the Hessian. We can refine this to the number density of maxima, minima, and saddle points by restricting the integration domain to the second order derivatives for which the Hessian has either two negative eigenvalues, two positive eigenvalues, or one negative and one positive eigenvalue, with the eigenvalues

\[ \lambda_{1,2} = \frac{f_{11} + f_{22}}{2} \pm \sqrt{ \left( \frac{f_{11} - f_{22}}{2} \right)^2 + f_{12}^2 }. \]
Distribution of eigenvalue fields
Given a Gaussian random field $f$, we evaluate the distribution of the eigenvalue fields of the Hessian $\partial^2 f / \partial q_i \partial q_j$ in $n = 2$ and $n = 3$ dimensions. This is a famous result known as the Doroshkevich formula.
The Hessian of a random field is fully determined by its second order derivatives $f_{ij} = \partial^2 f / \partial q_i \partial q_j$. In the two-dimensional case, the Hessian is determined by three degrees of freedom, $(f_{11}, f_{22}, f_{12})$. The eigenvalues $\lambda_1 \ge \lambda_2$ take the form

\[ \lambda_{1,2} = \frac{f_{11} + f_{22}}{2} \pm \sqrt{ \left( \frac{f_{11} - f_{22}}{2} \right)^2 + f_{12}^2 }, \]

while the eigenvector matrix can be parametrized by the rotation matrix

\[ R = \begin{pmatrix} \cos\varphi & -\sin\varphi \\ \sin\varphi & \cos\varphi \end{pmatrix}, \]

for the orientation $\varphi$.
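The closed-form eigenvalues can be checked against a generic symmetric eigensolver; the Hessian entries below are arbitrary illustrative numbers.

```python
import numpy as np

# Compare the closed-form eigenvalues with numpy's symmetric eigensolver
# for an arbitrary 2-D Hessian (f11, f12, f22).
f11, f12, f22 = 0.7, -0.3, -1.2
H = np.array([[f11, f12], [f12, f22]])

mean = (f11 + f22) / 2
disc = np.sqrt(((f11 - f22) / 2) ** 2 + f12**2)
lam = np.array([mean + disc, mean - disc])     # lambda_1 >= lambda_2

print(lam, np.linalg.eigvalsh(H)[::-1])        # eigvalsh returns ascending order
```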
The derivatives $f_{ij}$ are linear statistics of the random field. The statistic $(f_{11}, f_{22}, f_{12})$ is thus normally distributed. The covariance matrix is of the form

\[ M = \frac{\sigma_2^2}{8} \begin{pmatrix} 3 & 1 & 0 \\ 1 & 3 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \]

with the moment $\sigma_2^2 = \langle (\nabla^2 f)^2 \rangle = \frac{1}{(2\pi)^2} \int k^4 P(k)\, dk$.
The distribution of the statistic $(f_{11}, f_{22}, f_{12})$ thus takes the form

\[ p(f_{11}, f_{22}, f_{12}) \propto \exp\left( -\frac{1}{\sigma_2^2} \left[ \frac{J_1^2}{2} + J_2 \right] \right), \]

with the rotationally invariant combinations $J_1 = \lambda_1 + \lambda_2 = f_{11} + f_{22}$ and $J_2 = (\lambda_1 - \lambda_2)^2 = (f_{11} - f_{22})^2 + 4 f_{12}^2$. The invariance of the exponent under rotations follows from the isotropy of the random field.
To obtain the distribution for the eigenvalue fields, we express the linear statistic in terms of the eigenvalues and the orientation. The equation $H = R\, \mathrm{diag}(\lambda_1, \lambda_2)\, R^T$ leads to the correspondence

\[ f_{11} = \lambda_1 \cos^2\varphi + \lambda_2 \sin^2\varphi, \qquad f_{22} = \lambda_1 \sin^2\varphi + \lambda_2 \cos^2\varphi, \qquad f_{12} = (\lambda_1 - \lambda_2) \sin\varphi \cos\varphi. \]
The measure transforms as $df_{11}\, df_{22}\, df_{12} = |\lambda_1 - \lambda_2|\, d\lambda_1\, d\lambda_2\, d\varphi$, leading to the distribution

\[ p(\lambda_1, \lambda_2) \propto (\lambda_1 - \lambda_2) \exp\left( -\frac{1}{\sigma_2^2} \left[ \frac{(\lambda_1 + \lambda_2)^2}{2} + (\lambda_1 - \lambda_2)^2 \right] \right), \]

for the ordered eigenvalues $\lambda_1 \ge \lambda_2$, after integrating out the orientation $\varphi$.
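Two moments make a quick consistency check of this distribution: assuming the covariance matrix of $(f_{11}, f_{22}, f_{12})$ given above, both $\langle (\lambda_1 + \lambda_2)^2 \rangle$ and $\langle (\lambda_1 - \lambda_2)^2 \rangle$ equal $\sigma_2^2$.

```python
import numpy as np

# Monte Carlo check of the invariant moments of the 2-D eigenvalue fields,
# sampling the second derivatives with the covariance matrix M above.
rng = np.random.default_rng(3)
sigma2 = 1.0
M = sigma2**2 / 8 * np.array([[3.0, 1.0, 0.0],
                              [1.0, 3.0, 0.0],
                              [0.0, 0.0, 1.0]])
f11, f22, f12 = rng.multivariate_normal(np.zeros(3), M, size=200_000).T

trace = f11 + f22                                # lambda_1 + lambda_2
diff = np.sqrt((f11 - f22) ** 2 + 4 * f12**2)    # lambda_1 - lambda_2
print(np.mean(trace**2), np.mean(diff**2))       # both close to sigma_2^2 = 1
```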
The distribution of the ordered eigenvalues ($\lambda_1 \ge \lambda_2 \ge \lambda_3$) of the Hessian of a three-dimensional Gaussian random field, first derived in 1970 by Doroshkevich, follows a similar form,

\[ p(\lambda_1, \lambda_2, \lambda_3) = \frac{15^3}{8 \sqrt{5}\, \pi\, \sigma_2^6}\, (\lambda_1 - \lambda_2)(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_3)\, \exp\left( -\frac{3 K_1^2}{\sigma_2^2} + \frac{15 K_2}{2 \sigma_2^2} \right), \]

with the rotationally invariant forms $K_1 = \lambda_1 + \lambda_2 + \lambda_3$ and $K_2 = \lambda_1 \lambda_2 + \lambda_1 \lambda_3 + \lambda_2 \lambda_3$, and $\sigma_2^2$ the variance of $\nabla^2 f$.