Home Projects and Code

Gaussian Random Field Theory

Job Feldbrugge

Gaussian random fields occur in many branches of physics and are particularly crucial to cosmology. Current theories for the early universe predict a nearly homogeneous universe with tiny energy fluctuations

\[ \delta(\boldsymbol{q}) = \frac{\rho(\boldsymbol{q})}{\bar{\rho}} - 1 \]

with the energy density \(\rho\) and the mean density \(\bar{\rho}\), observable in the cosmic microwave background radiation field. After the epoch of recombination, these fluctuations gravitationally collapse to form the cosmic structure we observe today in cosmological redshift surveys. The seed for the structure in our universe turns out to be very close to a realization of a Gaussian random field with a nearly scale-invariant power spectrum. Here I summarize a number of key techniques and results in Gaussian random field theory.

Gaussian random fields

Theories for the origin of our universe describe the density perturbation \(\delta:\mathbb{R}^d \to \mathbb{R}\) on the space \(\mathbb{R}^d\) in terms of random fields. Instead of predicting a particular distribution of the energy in the early universe, these theories describe its statistical properties. We will here restrict our attention to initial density fluctuations considered as a realization of a Gaussian random field.

The definition

First, note that the density perturbation has zero mean \[ \langle \delta \rangle = \langle \rho(\boldsymbol{q}) \rangle / \bar{\rho} - 1 =0. \] When \(\delta\) is a realization of a stationary Gaussian random field, the density perturbation at a point \(\delta(\boldsymbol{q})\) is normally distributed. The spatial correlations are fully characterized by the two-point correlation function \[ \xi(\boldsymbol{q}_1,\boldsymbol{q}_2) = \langle \delta(\boldsymbol{q}_1) \delta(\boldsymbol{q}_2) \rangle. \] As it turns out, a random field with a Gaussian PDF with zero mean, which is fully characterized by its two-point correlation function \(\xi(\boldsymbol{q}_1,\boldsymbol{q}_2)\), is uniquely described by the Gaussian distribution \[ p(\delta) = e^{-\frac{1}{2} \iint \delta(\boldsymbol{q}_1) K(\boldsymbol{q}_1,\boldsymbol{q}_2) \delta(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1\mathrm{d}\boldsymbol{q}_2}, \] with the kernel \(K\) defined as the inverse of the two-point correlation function \[ \int K(\boldsymbol{q}_1,\boldsymbol{q}) \xi(\boldsymbol{q},\boldsymbol{q}_2) \mathrm{d}\boldsymbol{q} = \delta_D^{(d)}(\boldsymbol{q}_1 - \boldsymbol{q}_2), \] with \(\delta_D^{(d)}\) the \(d\)-dimensional Dirac delta function. Given a set of fields \(S\), the probability that \(\delta \in S\) is given by the functional integral \[ P[\delta \in S] = \int_S e^{-\frac{1}{2} \iint \delta(\boldsymbol{q}_1)K(\boldsymbol{q}_1,\boldsymbol{q}_2) \delta(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2} \mathcal{D}\delta \] with \(\mathcal{D}\delta\) the path integral measure. The expectation value of some functional \(Q\) of \(\delta\) is given by the integral \[ \langle Q[\delta] \rangle = \int Q[\delta] e^{-\frac{1}{2} \iint \delta(\boldsymbol{q}_1)K(\boldsymbol{q}_1,\boldsymbol{q}_2) \delta(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2} \mathcal{D}\delta. \]

The random field at a finite set of points

When the set \(S\) restricts the density perturbation at a finite number of points \(\boldsymbol{q}_i\) for \(i=1,2,\dots,n\), we can write the distribution as a multi-dimensional Gaussian distribution \[ p(\delta(\boldsymbol{q}_1), \dots,\delta(\boldsymbol{q}_n)) = \frac{\exp\left[-\frac{1}{2} \sum_{i,j}^n \delta(\boldsymbol{q}_i) (M^{-1})_{ij}\delta(\boldsymbol{q}_j)\right]}{ \sqrt{(2\pi)^n \det M}} \] with the covariance matrix \[ M_{ij} = \xi(\boldsymbol{q}_i, \boldsymbol{q}_j). \] Linear statistics of the density perturbation \(Y=(Y_1,\dots,Y_n)\) follow an analogous distribution \[ p(Y) = \frac{\exp\left[-\frac{1}{2} (Y-\bar{Y})^T M_Y^{-1}(Y-\bar{Y}) \right]} { \sqrt{(2\pi)^n \det M_Y}} \] with the mean \(\bar{Y}=\langle Y \rangle\) and the covariance matrix \(M_Y=\langle (Y-\bar{Y}) (Y-\bar{Y})^T \rangle\).
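This finite-dimensional distribution is straightforward to sample directly. Below is a minimal sketch in Python, assuming an illustrative Gaussian correlation function \(\xi(r) = e^{-r^2/2}\) (chosen for simplicity, not a cosmological one): the covariance matrix \(M_{ij} = \xi(\|\boldsymbol{q}_i - \boldsymbol{q}_j\|)\) is factored with a Cholesky decomposition, which maps white noise to correlated field values.

```python
import numpy as np

def sample_field_at_points(q, xi, n_samples, seed=None):
    """Draw realizations of the field at the points q for a given
    two-point correlation function xi(r)."""
    rng = np.random.default_rng(seed)
    # Covariance matrix M_ij = xi(|q_i - q_j|).
    M = xi(np.abs(q[:, None] - q[None, :]))
    # Cholesky factor L with M = L L^T maps unit normals to
    # Gaussian variables with covariance M.
    L = np.linalg.cholesky(M)
    return rng.standard_normal((n_samples, len(q))) @ L.T

# Illustrative correlation function and points (assumptions of this example).
xi = lambda r: np.exp(-r**2 / 2)
q = np.linspace(0.0, 10.0, 6)
samples = sample_field_at_points(q, xi, n_samples=200_000, seed=1)
```

The empirical covariance of the samples approaches \(M_{ij}\) as the number of samples grows.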

The cosmological principle

It follows from the cosmological principle that the initial conditions of our universe are statistically homogeneous (all points are statistically equivalent) and statistically isotropic (all directions are statistically equivalent). By homogeneity, the two-point correlation function depends only on the separation of the points, \[ \xi(\boldsymbol{q}_1,\boldsymbol{q}_2) = \xi(\boldsymbol{q}_1-\boldsymbol{q}_2). \] By isotropy, the two-point correlation function is in addition only sensitive to the norm of the separation, \[ \xi(\boldsymbol{q}_1,\boldsymbol{q}_2) = \xi(\|\boldsymbol{q}_1-\boldsymbol{q}_2\|). \] In these notes, I will always assume statistical homogeneity and isotropy.

Gaussian random fields in Fourier space

Gaussian random fields are conveniently described in Fourier space. Given the Fourier transform \[ \hat{\delta}(\boldsymbol{k}) = \int \delta(\boldsymbol{q}) e^{i \boldsymbol{k}\cdot \boldsymbol{q}} \mathrm{d}\boldsymbol{q}, \] and the inverse transformation \[ \delta(\boldsymbol{q}) = \int \hat{\delta}(\boldsymbol{k}) e^{-i \boldsymbol{k}\cdot \boldsymbol{q}} \frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d}, \] the two-point correlation function of the Fourier modes is diagonal \[\begin{align} \langle \hat{\delta}(\boldsymbol{k}_1) \hat{\delta}^*(\boldsymbol{k}_2)\rangle &= \iint e^{i(\boldsymbol{k}_1 \cdot \boldsymbol{q}_1 - \boldsymbol{k}_2 \cdot \boldsymbol{q}_2)} \langle \delta(\boldsymbol{q}_1) \delta(\boldsymbol{q}_2)\rangle \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2 \\ &= \iint e^{i(\boldsymbol{k}_1 \cdot \boldsymbol{q}_1 - \boldsymbol{k}_2 \cdot \boldsymbol{q}_2)} \xi(\|\boldsymbol{q}_1-\boldsymbol{q}_2\|) \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2 \\ &= \iint e^{i(\boldsymbol{k}_1 \cdot (\boldsymbol{q}_1-\boldsymbol{q}_2) +(\boldsymbol{k}_1- \boldsymbol{k}_2) \cdot \boldsymbol{q}_2)} \xi(\|\boldsymbol{q}_1-\boldsymbol{q}_2\|) \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2 \\ &= \int e^{i(\boldsymbol{k}_1- \boldsymbol{k}_2) \cdot \boldsymbol{q}} \mathrm{d}\boldsymbol{q} \int e^{i \boldsymbol{k}_1 \cdot \boldsymbol{r} } \xi(\|\boldsymbol{r}\|) \mathrm{d}\boldsymbol{r} \\ &=(2\pi)^d \delta_D^{(d)}(\boldsymbol{k}_1-\boldsymbol{k}_2) \int e^{i \boldsymbol{k}_1 \cdot \boldsymbol{r} } \xi(\|\boldsymbol{r}\|) \mathrm{d}\boldsymbol{r} \\ &=(2\pi)^d \delta_D^{(d)}(\boldsymbol{k}_1-\boldsymbol{k}_2) P(\|\boldsymbol{k}_1\|), \end{align}\] with the change of coordinates \(\boldsymbol{r} = \boldsymbol{q}_1-\boldsymbol{q}_2\) and \(\boldsymbol{q}=\boldsymbol{q}_2\), and where the power spectrum \(P(k)\) describes the amplitude of the Fourier modes. Note that we use the identity

\[\int e^{i \boldsymbol{k}\cdot \boldsymbol{q}}\mathrm{d}\boldsymbol{q} = (2\pi)^d \delta_D^{(d)}(\boldsymbol{k}).\]

The power spectrum is the Fourier transform of the correlation function. For two-dimensional random fields \[\begin{align} P(k) &= \int e^{i \boldsymbol{k} \cdot \boldsymbol{r} } \xi(\|\boldsymbol{r}\|) \mathrm{d}\boldsymbol{r}\\ &= \int_0^\infty \left[ \int_0^{2\pi} e^{i k r \cos\theta } \mathrm{d}\theta \right]\xi(r) r \mathrm{d}r\\ &= 2\pi \int_0^\infty r J_0(kr)\xi(r) \mathrm{d}r, \end{align}\] with \(J_0\) the Bessel function of the first kind and \(k=\|\boldsymbol{k}\|\), using the polar coordinate measure \(\mathrm{d}\boldsymbol{r} = r \mathrm{d}r \mathrm{d}\theta\). For three-dimensional random fields \[\begin{align} P(k) &= \int e^{i \boldsymbol{k} \cdot \boldsymbol{r} } \xi(\|\boldsymbol{r}\|) \mathrm{d}\boldsymbol{r}\\ &= \int_0^\infty \left[\int_0^{2\pi} \mathrm{d}\varphi \int_0^\pi e^{i k r \cos \theta} \sin \theta\, \mathrm{d}\theta\right] \xi(r) r^2 \mathrm{d}r\\ &= 4 \pi \int_0^\infty r^2 j_0(kr) \xi(r) \mathrm{d}r, \end{align}\] with \(j_0(x) = \sin(x)/x\) the spherical Bessel function of the first kind, using the spherical coordinate measure \(\mathrm{d}\boldsymbol{r} = r^2\sin \theta\mathrm{d}r \mathrm{d}\theta \mathrm{d}\varphi\) with the polar angle \(\theta\) measured from the direction of \(\boldsymbol{k}\).
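These Hankel-type relations are easy to verify numerically for a correlation function with a known transform. A sketch for the two-dimensional case, using the illustrative choice \(\xi(r) = e^{-r^2/2}\), whose two-dimensional Fourier transform is \(P(k) = 2\pi e^{-k^2/2}\):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def power_spectrum_2d(xi, k, r_max=50.0):
    """P(k) = 2 pi int_0^inf r J0(k r) xi(r) dr for a 2D isotropic field.
    The upper cutoff r_max assumes xi decays quickly."""
    val, _ = quad(lambda r: r * j0(k * r) * xi(r), 0.0, r_max, limit=200)
    return 2.0 * np.pi * val

xi = lambda r: np.exp(-r**2 / 2)   # illustrative correlation function
for k in (0.0, 0.5, 1.0, 2.0):
    assert abs(power_spectrum_2d(xi, k) - 2*np.pi*np.exp(-k**2/2)) < 1e-6
```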

Not only are the two-point correlations of the Fourier modes diagonal, the Fourier modes are independent. Since the Fourier modes fully determine the real-space fields, we obtain the distribution of the Fourier modes \[\begin{align} p(\hat{\delta}) &= \exp\left[ -\frac{1}{2} \iint \delta(\boldsymbol{q}_1) K(\boldsymbol{q}_1 -\boldsymbol{q}_2) \delta(\boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2\right]\\ %%%%% &= \exp\left[ -\frac{1}{2} \iint \left[\int \hat{\delta}(\boldsymbol{k}_1) e^{-i\boldsymbol{q}_1 \cdot \boldsymbol{k}_1} \frac{\mathrm{d}\boldsymbol{k}_1}{(2\pi)^d} \right] K(\boldsymbol{q}_1 -\boldsymbol{q}_2) \left[\int \hat{\delta}^*(\boldsymbol{k}_2) e^{i\boldsymbol{q}_2 \cdot \boldsymbol{k}_2} \frac{\mathrm{d}\boldsymbol{k}_2}{(2\pi)^d} \right] \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2\right]\\ %%%%% &= \exp\left[ -\frac{1}{2} \iint \hat{\delta}(\boldsymbol{k}_1) \left[ \iint e^{-i\boldsymbol{q}_1\cdot \boldsymbol{k}_1+ i\boldsymbol{q}_2 \cdot \boldsymbol{k}_2} K(\boldsymbol{q}_1 -\boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2\right] \hat{\delta}^*(\boldsymbol{k}_2) \frac{\mathrm{d}\boldsymbol{k}_1}{(2\pi)^d} \frac{\mathrm{d}\boldsymbol{k}_2}{(2\pi)^d}\right]\\ %%%%% &= \exp\left[ -\frac{1}{2} \iint \hat{\delta}(\boldsymbol{k}_1) \left[ \iint e^{-i(\boldsymbol{q}_1 - \boldsymbol{q}_2) \cdot \boldsymbol{k}_1+ i\boldsymbol{q}_2 \cdot (\boldsymbol{k}_2-\boldsymbol{k}_1)} K(\boldsymbol{q}_1 -\boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2\right] \hat{\delta}^*(\boldsymbol{k}_2) \frac{\mathrm{d}\boldsymbol{k}_1}{(2\pi)^d} \frac{\mathrm{d}\boldsymbol{k}_2}{(2\pi)^d}\right]\\ %%%%% &= \exp\left[ -\frac{1}{2} \iint \hat{\delta}(\boldsymbol{k}_1) \left[ (2\pi)^d \delta_D^{(d)}(\boldsymbol{k}_1 - \boldsymbol{k}_2) \int K(\boldsymbol{r}) e^{i \boldsymbol{r} \cdot \boldsymbol{k}_1} \mathrm{d}\boldsymbol{r}\right] \hat{\delta}^*(\boldsymbol{k}_2) 
\frac{\mathrm{d}\boldsymbol{k}_1}{(2\pi)^d} \frac{\mathrm{d}\boldsymbol{k}_2}{(2\pi)^d}\right]\\ %%%%% &= \exp\left[ -\frac{1}{2} \int |\hat{\delta}(\boldsymbol{k})|^2 \hat{K}(\boldsymbol{k}) \frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d} \right]. \end{align}\]

The definition of the kernel can be written in Fourier space using the convolution theorem \[\begin{align} \delta_D^{(d)}(\boldsymbol{q}_1-\boldsymbol{q}_2) &=\int K(\boldsymbol{q}_1 - \boldsymbol{q}) \xi(\boldsymbol{q}-\boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}\\ &=\iint K(\boldsymbol{q}_1 - \boldsymbol{q}) P(k) e^{i \boldsymbol{k} \cdot (\boldsymbol{q}-\boldsymbol{q}_2)} \frac{\mathrm{d}\boldsymbol{k} }{(2\pi)^d}\mathrm{d}\boldsymbol{q}\\ &=\int \left[\int K(\boldsymbol{q}_1 - \boldsymbol{q}) e^{-i \boldsymbol{k} \cdot (\boldsymbol{q}_1-\boldsymbol{q})} \mathrm{d}\boldsymbol{q} \right]P(k) e^{i \boldsymbol{k} \cdot (\boldsymbol{q}_1-\boldsymbol{q}_2)} \frac{\mathrm{d}\boldsymbol{k} }{(2\pi)^d}\\ &= \int \hat{K}(\boldsymbol{k})P(k) e^{i \boldsymbol{k} \cdot (\boldsymbol{q}_1-\boldsymbol{q}_2)}\frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d}, \end{align}\] yielding \(\hat{K}(\boldsymbol{k}) = 1 /P(k)\) for all \(\boldsymbol{k}\). Substituting the Fourier transform of the kernel into the distribution, we obtain the simple expression \[\begin{align} p(\hat{\delta}) = \exp\left[ - \int \frac{|\hat{\delta}(\boldsymbol{k})|^2}{2 P(k)} \frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d} \right]. \end{align}\] This demonstrates that the Fourier modes \(\hat{\delta}(\boldsymbol{k})\) are complex Gaussian random numbers with variance set by the power spectrum \(P(k)\), and that the arguments \(\text{arg}(\hat{\delta})\) are uniformly distributed. Furthermore note that the Fourier modes are independent modulo the reality condition \[\begin{align} \hat{\delta}^*(-\boldsymbol{k}) =\left[ \int \delta(\boldsymbol{q}) e^{-i\boldsymbol{k}\cdot \boldsymbol{q}}\mathrm{d}\boldsymbol{q}\right]^* = \int \delta(\boldsymbol{q}) e^{i\boldsymbol{k}\cdot \boldsymbol{q}}\mathrm{d}\boldsymbol{q} = \hat{\delta}(\boldsymbol{k}) \end{align}\] with \(\delta(\boldsymbol{q}) = \delta^*(\boldsymbol{q})\).

Generating realizations

In the analysis above, we observe that Gaussian random fields are most easily expressed in terms of Fourier space as the exponent of the distribution is diagonal in \(\hat{\delta}\). Note that the Fourier transform of a real-valued function \(\delta:\mathbb{R}^d \to \mathbb{R}\) satisfies the condition \(\hat{\delta}(\boldsymbol{k}) = \hat{\delta}^*(-\boldsymbol{k})\).

Generating a realization of a Gaussian random field on a regular lattice \(\{ \boldsymbol{q}_{i_1,\dots,i_d}\}\), with the values \(\delta_{i_1,\dots,i_d} = \delta(\boldsymbol{q}_{i_1,\dots,i_d})\), can thus be reduced to drawing the normally distributed Fourier modes \(\hat{\delta}_{i_1,\dots,i_d}\) with the reality condition \(\hat{\delta}^*_{-i_1,\dots,-i_d} = \hat{\delta}_{i_1,\dots,i_d}\). The realization \(\{\delta_{i_1,\dots,i_d}\}\) can then be efficiently evaluated with the inverse fast Fourier transform.

These observations allow for an efficient algorithm for the generation of unconstrained Gaussian random fields:

  1. White noise: Generate the set of identically distributed random numbers \(n_{i_1,\dots,i_d} \sim \mathcal{N}(0,1)\).
  2. Fourier transform the noise: Fast Fourier transform the realization of the white noise \(\{\hat{n}_{i_1,\dots,i_d}\} = FFT[\{n_{i_1,\dots,i_d}\}]\). The Fourier transform consists of complex Gaussian random numbers with uniformly distributed arguments, satisfying the reality condition \(\hat{n}^*_{-i_1,\dots,-i_d} = \hat{n}_{i_1,\dots,i_d}\).
  3. Scale the Fourier modes: Define the Fourier modes \(\hat{\delta}_{i_1,\dots,i_d} = \sqrt{P(\|\boldsymbol{k}_{i_1,\dots,i_d}\|)}\hat{n}_{i_1,\dots,i_d}\). The modes \(\hat{\delta}_{i_1,\dots,i_d}\) are independent complex Gaussian random numbers with variance set by the power spectrum \(P(\|\boldsymbol{k}_{i_1,\dots,i_d}\|)\), satisfying the reality condition \(\hat{\delta}^*_{-i_1,\dots,-i_d} = \hat{\delta}_{i_1,\dots,i_d}\).
  4. Transform to real-space: Use the inverse Fourier transform to generate the Gaussian random field \(\{ \delta_{i_1,\dots,i_d}\} = FFT^{-1}(\{ \hat{\delta}_{i_1,\dots,i_d}\})\).
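The four steps translate directly into numpy. The sketch below uses an illustrative spectrum \(P(k) = k^2 e^{-k^2}\) and omits the physical normalization factors (box volume, grid spacing) that relate the discrete modes to the continuum power spectrum; those conventions are assumptions of this example, not prescriptions from the text.

```python
import numpy as np

def gaussian_random_field(n, box, power_spectrum, seed=None):
    """Unconstrained Gaussian random field on an n x n grid of side `box`,
    following the four steps above (simplified normalization)."""
    rng = np.random.default_rng(seed)
    # 1. White noise in real space.
    noise = rng.standard_normal((n, n))
    # 2. Fourier transform; the reality condition is inherited from
    #    the real-valued noise.
    noise_k = np.fft.fft2(noise)
    # 3. Scale each mode by sqrt(P(|k|)).
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
    k_norm = np.sqrt(k[:, None]**2 + k[None, :]**2)
    delta_k = np.sqrt(power_spectrum(k_norm)) * noise_k
    # 4. Inverse transform; the result is real up to round-off because
    #    the scaling preserves the Hermitian symmetry.
    return np.real(np.fft.ifft2(delta_k))

P = lambda k: k**2 * np.exp(-k**2)   # illustrative spectrum, P(0) = 0
delta = gaussian_random_field(128, 100.0, P, seed=42)
```

Since \(P(0)=0\) for this spectrum, the generated field has zero mean by construction.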

Generating constrained realizations

When setting up initial conditions for \(N\)-body simulations, it often suffices to construct an unconstrained Gaussian random field. However, when we want to study the formation of a specific geometric feature, it is most efficient to construct a constrained realization which satisfies a set of linear conditions. We will here follow the Hoffman-Ribak method, using the notation of van de Weygaert and Bertschinger (1996).

We first define the set of \(M\) linear constraints \[ \Gamma = \{ C_i \equiv C_i[\delta;\boldsymbol{q}_i] = c_i\,|\, i = 1,\dots,M\}, \] at the points \(\boldsymbol{q}_i\) and values \(c_i \in \mathbb{R}\) for \(i=1,\dots, M\). Note that the linear constraints are either the function value \[C[\delta;\boldsymbol{q}_i] = \delta(\boldsymbol{q}_i),\] a derivative \[C[\delta;\boldsymbol{q}_i] = \frac{\partial \delta}{\partial q_i} (\boldsymbol{q}_i),\] or a convolution with some kernel \(g\), \[C[\delta;\boldsymbol{q}_i] = \int g(\boldsymbol{q}_i - \boldsymbol{q})\delta(\boldsymbol{q}) \mathrm{d}\boldsymbol{q}.\]

Now using Bayes' formula, we write the conditional distribution \[ p(\delta|\Gamma) = \frac{p(\delta, \Gamma)}{p(\Gamma)} = \frac{p(\delta)}{p(\Gamma)}, \] where \(p(\delta, \Gamma) = p(\delta)\) since \(\Gamma\) is fully determined by \(\delta\). Since the constraints \(C_i\) are linear in the Gaussian random field, the distribution \(p(\Gamma)\) is again Gaussian, \[ p(\Gamma) = \frac{\exp\left[-\frac{1}{2} \boldsymbol{C}^T Q^{-1} \boldsymbol{C}\right]}{\sqrt{(2\pi)^M \det Q}}, \] with the vector \(\boldsymbol{C}=(C_1,\dots,C_M)\) and the covariance matrix \[ Q = \left\langle \boldsymbol{C}^T \boldsymbol{C}\right\rangle. \] The constrained distribution thus takes the form \[\begin{align} p(\delta|\Gamma)= \sqrt{(2\pi)^M \det Q} \ e^{-\frac{1}{2} \left(\iint \delta(\boldsymbol{q}_1) K(\boldsymbol{q}_1-\boldsymbol{q}_2) \delta(\boldsymbol{q}_2) \mathrm{d} \boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2 - \boldsymbol{C}^T Q^{-1} \boldsymbol{C}\right)}. \end{align}\]

It is worthwhile to simplify this expression a bit further. We can write the constraint as \[\begin{align} C_i(\boldsymbol{q}_i) &= \int \delta_D^{(d)}(\boldsymbol{q}_2) C_i(\boldsymbol{q}_i - \boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}_2\\ &= \iint \xi(\boldsymbol{q}_1) K(\boldsymbol{q}_1 - \boldsymbol{q}_2) C_i(\boldsymbol{q}_i - \boldsymbol{q}_2) \mathrm{d}\boldsymbol{q}_1\mathrm{d}\boldsymbol{q}_2\\ &= \int \hat{C}_i(\boldsymbol{k}) P(k) \hat{K}(\boldsymbol{k}) e^{-i \boldsymbol{k} \cdot \boldsymbol{q}_i} \frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d}, \end{align}\] with the power spectrum \(P(k)\) and the Fourier transforms \(\hat{K}\) and \(\hat{C}_i\) of the kernel \(K\) and the constraint \(C_i\). If we now define the function \(\hat{P}_i(\boldsymbol{k})\) by \[ (2\pi)^d \hat{P}_i(\boldsymbol{k}_1) \delta_D^{(d)}(\boldsymbol{k}_1-\boldsymbol{k}_2) = \langle \hat{C}_i(\boldsymbol{k}_1) \hat{\delta}^*(\boldsymbol{k}_2)\rangle \] we can obtain a relation between \(\hat{\delta}\) and \(\hat{C}_i\), \[ \hat{C}_i(\boldsymbol{k}) = \frac{\hat{P}_i(\boldsymbol{k})}{P(k)} \hat{\delta}(\boldsymbol{k}). \] Substituting this equation in the constraint \(C_i\), we obtain \[\begin{align} C_i(\boldsymbol{q}_i) &= \int \hat{\delta}(\boldsymbol{k}) \hat{P}_i(\boldsymbol{k}) \hat{K}(\boldsymbol{k}) e^{-i \boldsymbol{k} \cdot \boldsymbol{q}_i} \frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^d}\\ &= \int \hat{\delta}(\boldsymbol{k}_1) \hat{K}(\boldsymbol{k}_1) \langle \hat{C}_i(\boldsymbol{k}_1) \hat{\delta}^*(\boldsymbol{k}_2)\rangle e^{-i \boldsymbol{k}_1 \cdot \boldsymbol{q}_i} \frac{\mathrm{d}\boldsymbol{k}_1}{(2\pi)^d}\frac{\mathrm{d}\boldsymbol{k}_2}{(2\pi)^d}\\ &=\iint \delta(\boldsymbol{q}_1) K(\boldsymbol{q}_1-\boldsymbol{q}_2) \xi_i(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1\mathrm{d}\boldsymbol{q}_2, \end{align}\] where we define \(\xi_i\) as the Fourier transform of \(\hat{P}_i\), \[ \xi_i(\boldsymbol{q}) = \langle \delta(\boldsymbol{q}) C_i(\boldsymbol{q}_i)\rangle. \]

Let \(\xi_{ij}\) be the \((ij)\)th element of \(Q\). The identity for the constraint can be used to write \[ C_i \xi_{ij}^{-1} C_j = \iint \delta(\boldsymbol{q}_1)K(\boldsymbol{q}_1-\boldsymbol{q}_2) \bar{\delta}(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1 \mathrm{d}\boldsymbol{q}_2, \] where we have the mean field \[ \bar{\delta}(\boldsymbol{q}) = \xi_i(\boldsymbol{q}) \xi_{ij}^{-1} c_j. \] We can use this to write the distribution as
\[\begin{align} p(\delta|\Gamma)= \sqrt{(2\pi)^M \det Q}e^{-\frac{1}{2} \iint F(\boldsymbol{q}_1)K(\boldsymbol{q}_1-\boldsymbol{q}_2) F(\boldsymbol{q}_2)\mathrm{d}\boldsymbol{q}_1\mathrm{d}\boldsymbol{q}_2}, \end{align}\] where \(F\) is the deviation of \(\delta\) with respect to the constrained mean field \(\bar{\delta}(\boldsymbol{q}) = \langle \delta(\boldsymbol{q})|\Gamma \rangle\). As a consequence, the residue \(F\) is again a Gaussian random field.

Generating realizations of a constrained Gaussian random field has thus been reduced to constructing a realization of the residue. At first sight, this seems difficult since the residue vanishes at the constraints. However, Hoffman and Ribak (1991) showed that the residue has an additional property which can be used to generate it from an unconstrained realization. As it turns out, the statistical properties of the residue are independent of the values \(c_i\) and only depend on the constraints \(C_i\). This leads to the following method:

  1. Generate an unconstrained Gaussian random field \(\tilde{\delta}\) with the given power spectrum.
  2. Evaluate the constraints on the realization \(C_i[\tilde{\delta}] = \tilde{c}_i\).
  3. Construct the mean field corresponding to the values \(\tilde{c}_i\) and evaluate the residue \[F = \tilde{\delta} - \xi_i(\boldsymbol{q}) \xi_{ij}^{-1} \tilde{c}_j.\]
  4. Add the residue \(F\) to the mean field \(\bar{\delta}\) to obtain the constrained realization \[\delta = F + \xi_i(\boldsymbol{q}) \xi_{ij}^{-1} c_j.\]
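For a single value constraint \(\delta(\boldsymbol{q}_0) = c\) in one dimension the method reduces to a few lines, since \(\xi_i(\boldsymbol{q}) = \xi(\boldsymbol{q}-\boldsymbol{q}_0)\) and \(\xi_{ij} = \xi(0)\). The sketch below works on a periodic grid with the discrete convention \(\langle \delta_i \delta_j \rangle = \frac{1}{n}\sum_k P_k e^{ik(q_i - q_j)}\); the spectrum and the constraint location are illustrative assumptions.

```python
import numpy as np

def grf_1d(P_k, rng):
    """Unconstrained periodic 1D field with per-mode variance P_k."""
    noise = rng.standard_normal(len(P_k))
    return np.real(np.fft.ifft(np.sqrt(P_k) * np.fft.fft(noise)))

def hoffman_ribak_value(P_k, i0, c, rng):
    """Constrained realization with delta[i0] = c, following the four
    Hoffman-Ribak steps above for a single value constraint."""
    xi = np.real(np.fft.ifft(P_k))       # grid two-point function xi(m)
    xi_i = np.roll(xi, i0)               # xi_i(q) = xi(q - q0)
    tilde = grf_1d(P_k, rng)             # 1. unconstrained realization
    c_tilde = tilde[i0]                  # 2. evaluate the constraint
    F = tilde - xi_i * c_tilde / xi[0]   # 3. residue
    return F + xi_i * c / xi[0]          # 4. add the requested mean field

rng = np.random.default_rng(3)
k = np.fft.fftfreq(256)
P_k = np.exp(-(k / 0.05)**2)             # illustrative spectrum
delta = hoffman_ribak_value(P_k, i0=40, c=2.5, rng=rng)
```

By construction the output satisfies the constraint exactly, while away from \(\boldsymbol{q}_0\) it fluctuates with the correct correlations.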

Geometric statistics

In the 1940s, Stephen Rice, while working at Bell Labs, developed a framework to calculate the number densities of points in one-dimensional random fields. In subsequent years, the results were extended to multi-dimensional random fields.

Rice’s formula: Consider the set of points \[ \Lambda(\lambda_1,\dots,\lambda_d) = \{\boldsymbol{q} \in \mathbb{R}^d| f_i[\delta] (\boldsymbol{q}) = \lambda_i\} \] with the \(d\) smooth conditions \(f_i[\delta]:\mathbb{R}^d \to \mathbb{R}\) of the function \(\delta : \mathbb{R}^d\to \mathbb{R}\) and \(\lambda_i \in \mathbb{R}\), with \(i=1,\dots,d\). For convenience we drop the dependence on \(\delta\) from the notation, i.e., \(f_i=f_i[\delta]\). When \(\delta\) is a homogeneous random field and the set \(\Lambda(\lambda_1,\dots,\lambda_d)\) almost always consists of only isolated points, the number density \(\mathcal{N}(\lambda_1,\dots,\lambda_d)\) of points in \(\Lambda(\lambda_1,\dots,\lambda_d)\) is equal to the expectation value \[ \mathcal{N}(\lambda_1,\dots,\lambda_d) =\left \langle \left|\det \left[\frac{\partial f_j}{\partial q_i}\right]_{i,j=1,\dots,d}\right| \prod_{i=1}^d \delta_D^{(1)}(f_i(\boldsymbol{q})- \lambda_i)\right\rangle. \]

Proof: Given a continuous and differentiable realization of the random field, construct the generalized function \[ n(\boldsymbol{q}) = \sum_{\boldsymbol{r} \in \Lambda(\lambda_1,\dots,\lambda_d)} \delta_{D}^{(d)}(\boldsymbol{q}-\boldsymbol{r}). \] The number of points in \(\Lambda(\lambda_1,\dots,\lambda_d)\) for this realization is given by the integral \(\int n(\boldsymbol{q}) \mathrm{d}\boldsymbol{q}\). Now, since the points in \(\Lambda(\lambda_1,\dots,\lambda_d)\) are isolated, every point \(\boldsymbol{r} \in \Lambda(\lambda_1,\dots,\lambda_d)\) has an open set \(U_\boldsymbol{r}\) containing only this point. We can write the number of points in \(\Lambda(\lambda_1,\dots,\lambda_d)\) as \[\begin{align} \int n(\boldsymbol{q}) \mathrm{d}\boldsymbol{q} &= \int \sum_{\boldsymbol{r} \in \Lambda(\lambda_1,\dots,\lambda_d)} \delta_{D}^{(d)}(\boldsymbol{q}-\boldsymbol{r}) \mathrm{d}\boldsymbol{q}\\ &= \sum_{\boldsymbol{r} \in \Lambda(\lambda_1,\dots,\lambda_d)} \int_{U_\boldsymbol{r}} \delta_{D}^{(d)}(\boldsymbol{q}-\boldsymbol{r}) \mathrm{d}\boldsymbol{q}\\ &= \sum_{\boldsymbol{r} \in \Lambda(\lambda_1,\dots,\lambda_d)} \int_{U_\boldsymbol{r}} \left| \det [\partial_i f_j(\boldsymbol{q})]_{i,j} \right| \prod_{i=1}^d\delta_{D}^{(1)}(f_i(\boldsymbol{q})-\lambda_i) \mathrm{d}\boldsymbol{q} \end{align}\] using the transformation of the Dirac delta function \[ \int_{U} \delta_D^{(d)}( \boldsymbol{q} - \boldsymbol{f}^{-1}(\boldsymbol{\lambda})) \mathrm{d}\boldsymbol{q} = \int_U \delta_D^{(d)}(\boldsymbol{f}(\boldsymbol{q}) - \boldsymbol{\lambda}) | \det \nabla \boldsymbol{f}|\mathrm{d}\boldsymbol{q}, \] for a smooth vector-valued function \(\boldsymbol{f}:\mathbb{R}^d \to \mathbb{R}^d\) and \(\boldsymbol{\lambda} \in \mathbb{R}^d\), assuming \(\boldsymbol{f}(\boldsymbol{q}) = \boldsymbol{\lambda}\) for only a single point \(\boldsymbol{q}\in U\). 
As the integrand only has support at the point \(\boldsymbol{r}\), we can write the sum of integrals over \(U_{\boldsymbol{r}}\) as a single integral over \(\mathbb{R}^d\). Using these identities, we obtain the number density of points in \(\Lambda(\lambda_1,\dots,\lambda_d)\) as the expectation value \[\begin{align} \mathcal{N}(\lambda_1,\dots,\lambda_d) &=\left \langle \frac{\int_L n(\boldsymbol{q}) \mathrm{d}\boldsymbol{q}}{\int_L \mathrm{d}\boldsymbol{q}} \right\rangle\\ &= \frac{\int_L \left\langle \left| \det [\partial_i f_j(\boldsymbol{q})]_{i,j} \right| \prod_{i=1}^d\delta_{D}^{(1)}(f_i(\boldsymbol{q})-\lambda_i)\right\rangle \mathrm{d}\boldsymbol{q}}{\int_L \mathrm{d}\boldsymbol{q}}\\ &=\left \langle \left|\det \left[\frac{\partial f_j}{\partial q_i}\right]_{i,j=1,\dots,d}\right| \prod_{i=1}^d \delta_D^{(1)}(f_i(\boldsymbol{q})- \lambda_i)\right\rangle \end{align}\] in a regulating box \(L \subset \mathbb{R}^d\) in the limit \(L \to \mathbb{R}^d\), since the expectation value is independent of the position by the statistical homogeneity.

To illustrate Rice’s formula, consider a few examples:

Example 1: The number density of level crossings \(\delta = \nu\) in a one-dimensional random field \(\delta\) corresponds to the condition \(f_1[\delta] = \delta, \lambda_1=\nu\). Using Rice’s formula we obtain \[\begin{align} \mathcal{N}(\nu) &= \langle |\delta'(q)| \delta_D^{(1)}(\delta(q)-\nu) \rangle\\ &= \int |\delta'(q)| p(\delta = \nu,\delta')\ \mathrm{d} \delta'. \end{align}\] We can select up- and down-crossings by restricting the integral over \(\delta'\) to \([0,\infty)\) and \((-\infty,0]\), respectively.
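For a stationary Gaussian field, \(\delta\) and \(\delta'\) are independent at a point (\(\langle \delta \delta' \rangle = \xi'(0) = 0\)), so the integral evaluates to \(\mathcal{N}(\nu) = \frac{\sigma_1}{\pi \sigma_0} e^{-\nu^2/(2\sigma_0^2)}\), with \(\sigma_0^2 = \langle \delta^2 \rangle\) and \(\sigma_1^2 = \langle \delta'^2 \rangle\). This can be checked by Monte Carlo; the spectrum, box size, and grid conventions below are illustrative assumptions:

```python
import numpy as np

n, box = 2**16, 4000.0
k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)
P_k = np.exp(-k**2)                      # illustrative spectrum

# Spectral moments in the grid convention
# <delta_i delta_j> = (1/n) sum_k P_k exp(i k (q_i - q_j)).
sigma0 = np.sqrt(np.sum(P_k) / n)
sigma1 = np.sqrt(np.sum(k**2 * P_k) / n)

rng = np.random.default_rng(7)
delta = np.real(np.fft.ifft(np.sqrt(P_k) * np.fft.fft(rng.standard_normal(n))))

# Count crossings of the level nu as sign changes on the grid.
nu = 0.5 * sigma0
crossings = np.count_nonzero(np.diff(np.sign(delta - nu)) != 0)
measured = crossings / box
predicted = sigma1 / (np.pi * sigma0) * np.exp(-nu**2 / (2 * sigma0**2))
```

For a box much longer than the correlation length, the measured crossing density should agree with the prediction to within several percent.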

Example 2: The number density of critical points in a one-dimensional random field \(\delta\) at function value \(\nu\in \mathbb{R}\). At a critical point, the first order derivative vanishes, i.e., consider the conditions \(f_1[\delta] = \delta'\) and \(\lambda_1=0\). The number density thus takes the form \[\begin{align} \mathcal{N}(\nu) &= \langle \left| \delta''(q)\right| \delta_D^{(1)}(\delta(q) - \nu)\delta_D^{(1)}(\delta'(q)) \rangle\\ &= \int |\delta''| p(\delta =\nu, \delta' = 0,\delta'') \ \mathrm{d}\delta''. \end{align}\] We can refine to the number density of maxima and minima by restricting the integral over \(\delta''\) to the intervals \((-\infty,0]\) and \([0,\infty)\).

Example 3: The number density of critical points in a two-dimensional random field \(\delta\) at function value \(\nu \in \mathbb{R}\). At the critical point, the first order derivatives vanish, i.e., consider the conditions \(f_1[\delta] = \partial_1 \delta, f_2[\delta] = \partial_2 \delta\) and \(\lambda_1=\lambda_2 = 0\). The number density thus takes the form \[\begin{align} \mathcal{N}(\nu) &= \left\langle \left| \det \begin{pmatrix} \partial_1^2 \delta & \partial_1 \partial_2 \delta \\ \partial_1 \partial_2 \delta & \partial_2^2 \delta \end{pmatrix} \right| \delta_D^{(1)}(\delta - \nu)\delta_D^{(1)}(\partial_1 \delta) \delta_D^{(1)}(\partial_2\delta) \right \rangle\\ &= \iiint |\partial_1^2 \delta\ \partial_2^2 \delta -(\partial_1 \partial_2 \delta)^2| p(\delta = \nu, \nabla \delta = 0, \partial_1^2 \delta, \partial_1 \partial_2 \delta, \partial_2^2 \delta) \ \mathrm{d}(\partial_1^2 \delta)\ \mathrm{d}(\partial_1 \partial_2 \delta)\ \mathrm{d} (\partial_2^2 \delta). \end{align}\] We can refine to the number density of maxima, minima, and saddle points by restricting the integration domain to the second order derivatives for which the Hessian \(H\) has either two negative eigenvalues, two positive eigenvalues, or one negative and one positive eigenvalue, with the eigenvalues \[ \lambda_{\pm} = \frac{1}{2} \left[ \partial_1^2 \delta + \partial_2^2 \delta \pm \sqrt{4 (\partial_1 \partial_2 \delta)^2 + (\partial_1^2 \delta - \partial_2^2 \delta)^2}\right]. \]

Distribution of eigenvalue fields

Given a Gaussian random field \(\delta:\mathbb{R}^d \to \mathbb{R}\), we evaluate the distribution of the eigenvalues of the Hessian for \(d=2\) and \(d=3\) dimensions. This is a famous result known as the Doroshkevich formula.

The Hessian of a random field is fully determined by its second order derivatives \[ T_{ij} = \frac{\partial^2 \delta}{\partial q_i \partial q_j}. \] In the two-dimensional case, the Hessian of \(\delta\) is determined by three degrees of freedom \(Y=(T_{11},T_{22},T_{12})\). The eigenvalues with \(\lambda_1 \geq \lambda_2\) take the form \[\begin{align} \lambda_1 &= \frac{1}{2} \left( T_{11} + T_{22} + \sqrt{4T_{12}^2 + (T_{11} - T_{22})^2}\right),\\ \lambda_2 &= \frac{1}{2} \left( T_{11} + T_{22} - \sqrt{4T_{12}^2 + (T_{11} - T_{22})^2}\right), \end{align}\] while the eigenvector matrix can be parametrized by the rotation matrix \[\begin{align} V = \begin{pmatrix} \cos \omega & \sin \omega \\ -\sin \omega & \cos \omega\end{pmatrix} \end{align}\] for the orientation \(\omega \in (-\pi/2,\pi/2]\).
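A quick sketch checking the closed-form eigenvalues against a generic symmetric eigenvalue solver (the matrix entries are arbitrary test values):

```python
import numpy as np

def eigenvalues_2d(t11, t22, t12):
    """Ordered eigenvalues (lambda1 >= lambda2) of the symmetric
    2x2 Hessian, using the closed form above."""
    s = np.sqrt(4 * t12**2 + (t11 - t22)**2)
    return 0.5 * (t11 + t22 + s), 0.5 * (t11 + t22 - s)

rng = np.random.default_rng(0)
t11, t22, t12 = rng.standard_normal(3)
l1, l2 = eigenvalues_2d(t11, t22, t12)
# eigvalsh returns the eigenvalues in ascending order.
ref = np.linalg.eigvalsh(np.array([[t11, t12], [t12, t22]]))
```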

The derivatives are linear statistics of the random field. The statistic \(Y\) is thus normally distributed. The covariance matrix is of the form \[\begin{align} M = \frac{\sigma_2^2}{8} \begin{pmatrix} 3 & 1 & 0 \\ 1 & 3 & 0 \\ 0 & 0 & 1\end{pmatrix} \end{align}\] with the moment \[\begin{align} \sigma_2^2 = \int k^4 P(k)\frac{\mathrm{d}\boldsymbol{k}}{(2\pi)^2}. \end{align}\]

The distribution of the statistic thus takes the form \[ p(T_{11},T_{22},T_{12}) = \frac{2\sqrt{2}}{\pi^{3/2} \sigma_2^3} e^{-\frac{3 k_1^2 - 8 k_2}{2 \sigma_2^2}} \] with the rotationally invariant combinations \[\begin{align} k_1 &\equiv \text{Tr}(H) = T_{11}+T_{22} = \lambda_1 + \lambda_2,\\ k_2 &\equiv \det(H) = T_{11}T_{22}-T_{12}^2 = \lambda_1 \lambda_2. \end{align}\] The invariance of the exponent under rotations follows from the isotropy of the random field.

To obtain the distribution for the eigenvalue fields, we express the linear statistic \(Y\) in terms of the eigenvalues and the orientation. The equation \[\begin{align} \begin{pmatrix} T_{11} & T_{12} \\ T_{12} & T_{22}\end{pmatrix} = V \begin{pmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{pmatrix} V^T \end{align}\] leads to the correspondence \[ (T_{11},T_{22},T_{12}) = (\lambda_1 \cos^2 \omega + \lambda_2 \sin^2 \omega, \lambda_1 \sin^2\omega + \lambda_2 \cos^2 \omega, (\lambda_2-\lambda_1) \cos \omega \sin \omega). \]

The measure transforms as \(\mathrm{d}T_{11} \wedge \mathrm{d}T_{22} \wedge \mathrm{d}T_{12} = (\lambda_1 - \lambda_2) \mathrm{d}\lambda_1 \wedge \mathrm{d}\lambda_2 \wedge \mathrm{d} \omega\), leading to the distribution

\[ p(\lambda_1,\lambda_2) = \frac{2\sqrt{2}}{\pi^{1/2} \sigma_2^3} e^{-\frac{3 k_1^2 - 8 k_2}{2 \sigma_2^2}}(\lambda_1-\lambda_2). \]
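As a sanity check, this density integrates to one over the ordered domain \(\lambda_1 \geq \lambda_2\); a sketch with \(\sigma_2 = 1\), using that \(3k_1^2 - 8k_2 = 3\lambda_1^2 - 2\lambda_1\lambda_2 + 3\lambda_2^2\):

```python
import numpy as np
from scipy.integrate import dblquad

def p(l2, l1):
    """p(lambda1, lambda2) for sigma_2 = 1; dblquad passes the inner
    variable (l2) first."""
    k1, k2 = l1 + l2, l1 * l2
    return 2 * np.sqrt(2) / np.sqrt(np.pi) \
        * np.exp(-(3 * k1**2 - 8 * k2) / 2) * (l1 - l2)

# Integrate l2 from -inf to l1 (ordered eigenvalues), l1 over the real line.
total, err = dblquad(p, -np.inf, np.inf, lambda l1: -np.inf, lambda l1: l1)
```

The integral `total` evaluates to 1 to within the quadrature error.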

The distribution of the ordered eigenvalues (\(\lambda_1 \geq \lambda_2 \geq \lambda_3\)) of a three-dimensional Gaussian random field, first derived in 1970 by Doroshkevich, follows a similar distribution \[ p(\lambda_1,\lambda_2,\lambda_3) = \frac{15^3}{16\sqrt{5}\pi^3 \sigma_2^6}e^{-\frac{3 (2k_1^2 - 5 k_2)}{2\sigma_2^2}}(\lambda_1-\lambda_2)(\lambda_1 - \lambda_3)(\lambda_2 - \lambda_3), \] with the rotationally invariant forms \[\begin{align} k_1 &\equiv \lambda_1 + \lambda_2 + \lambda_3,\\ k_2 &\equiv \lambda_1 \lambda_2 + \lambda_1 \lambda_3 + \lambda_2 \lambda_3. \end{align}\]