rp_pi_tpcf_jackknife

halotools.mock_observables.rp_pi_tpcf_jackknife(sample1, randoms, rp_bins, pi_bins, Nsub=[5, 5, 5], sample2=None, period=None, do_auto=True, do_cross=True, estimator='Natural', num_threads=1, seed=None, approx_cell1_size=None, approx_cell2_size=None, approx_cellran_size=None)[source]

Calculate the redshift-space correlation function, \(\xi(r_{p}, \pi)\), and the covariance matrix, \(C_{ij}\), between the ith and jth bins.

The covariance matrix is calculated using spatial jackknife sampling of the data volume. The spatial subsamples are defined by splitting the box along each dimension into the number of divisions set by the Nsub argument.
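
Schematically, writing \(\xi_{i}^{(k)}\) for the correlation function in bin \(i\) measured with jackknife subvolume \(k\) removed, and \(\bar{\xi}_{i}\) for its mean over the \(N_{\mathrm{sub}}\) jackknife samples, the covariance is estimated with the standard delete-one jackknife formula (shown here only as a sketch; consult the source for the exact normalization used):

\[C_{ij} = \frac{N_{\mathrm{sub}}-1}{N_{\mathrm{sub}}} \sum_{k=1}^{N_{\mathrm{sub}}} \left(\xi_{i}^{(k)} - \bar{\xi}_{i}\right)\left(\xi_{j}^{(k)} - \bar{\xi}_{j}\right)\]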

Example calls to this function appear in the documentation below. See the Formatting your xyz coordinates for Mock Observables calculations documentation page for instructions on how to transform your coordinate position arrays into the format accepted by the sample1 and sample2 arguments.

Parameters:
sample1 : array_like

Npts1 x 3 numpy array containing 3-D positions of points. See the Formatting your xyz coordinates for Mock Observables calculations documentation page, or the Examples section below, for instructions on how to transform your coordinate position arrays into the format accepted by the sample1 and sample2 arguments. Length units are comoving and assumed to be in Mpc/h, here and throughout Halotools.

randoms : array_like

Nran x 3 array containing 3-D positions of randomly distributed points.

rp_bins : array_like

array of boundaries defining the radial bins perpendicular to the LOS in which pairs are counted. Length units are comoving and assumed to be in Mpc/h, here and throughout Halotools.

pi_bins : array_like

array of boundaries defining the \(\pi\) bins parallel to the LOS in which pairs are counted. Length units are comoving and assumed to be in Mpc/h, here and throughout Halotools.

Nsub : array_like, optional

Length-3 numpy array of the number of divisions along each dimension defining the jackknife sample subvolumes. If a single integer is given, it is applied to each dimension. The total number of samples used is then given by numpy.prod(Nsub); see the short example following this parameter list. Default is 5 divisions per dimension.

sample2 : array_like, optional

Npts2 x 3 array containing 3-D positions of points. Passing sample2 as an input permits the calculation of the cross-correlation function. Default is None, in which case only the auto-correlation function will be calculated.

period : array_like, optional

Length-3 sequence defining the periodic boundary conditions in each dimension. If you instead provide a single scalar, Lbox, period is assumed to be the same in all Cartesian directions. If set to None (the default option), PBCs are set to infinity. Length units are comoving and assumed to be in Mpc/h, here and throughout Halotools.

do_auto : boolean, optional

Boolean determining whether the auto-correlation function will be calculated and returned. Default is True.

do_cross : boolean, optional

Boolean determining whether the cross-correlation function will be calculated and returned. Only relevant when sample2 is also provided. Default is True when sample2 is provided, otherwise False.

estimator : string, optional

Statistical estimator for the tpcf. Options are ‘Natural’, ‘Davis-Peebles’, ‘Hewett’, ‘Hamilton’, and ‘Landy-Szalay’. Default is ‘Natural’.

num_threads : int, optional

Number of threads to use in calculation, where parallelization is performed using the python multiprocessing module. Default is 1 for a purely serial calculation, in which case a multiprocessing Pool object will never be instantiated. A string ‘max’ may be used to indicate that the pair counters should use all available cores on the machine.

approx_cell1_size : array_like, optional

Length-3 array serving as a guess for the optimal manner by which points will be apportioned into subvolumes of the simulation box. The optimum choice unavoidably depends on the specs of your machine. The default choice is to use Lbox/10 in each dimension, which gives reasonable performance for most use-cases. Performance can vary sensitively with this parameter, so it is highly recommended that you experiment with it when carrying out performance-critical calculations.

approx_cell2_size : array_like, optional

Analogous to approx_cell1_size, but for sample2. See comments for approx_cell1_size for details.

approx_cellran_size : array_like, optional

Analogous to approx_cell1_size, but for randoms. See comments for approx_cell1_size for details.

seed : int, optional

Random number seed used to randomly downsample data, if applicable. Default is None, in which case downsampling will be stochastic.
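
As a minimal illustration of the Nsub convention described above (the values here are arbitrary and purely illustrative):

>>> import numpy as np
>>> Nsub = [4, 4, 2]        # non-uniform subdivision of the box
>>> int(np.prod(Nsub))      # total number of jackknife samples
32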

Returns:
correlation_function(s) : numpy.ndarray

len(rp_bins)-1 by len(pi_bins)-1 ndarray containing the correlation function \(\xi(r_p, \pi)\) computed in each of the bins defined by input rp_bins and pi_bins.

\[1 + \xi(r_{p},\pi) = \mathrm{DD}(r_{p},\pi) / \mathrm{RR}(r_{p},\pi)\]

if estimator is set to ‘Natural’, where \(\mathrm{DD}(r_{p},\pi)\) is calculated by the pair counter, and \(\mathrm{RR}(r_{p},\pi)\) is counted internally using “analytic randoms” if randoms is set to None (see notes for further details).

If sample2 is passed as input (and not exactly the same as sample1), three arrays of shape len(rp_bins)-1 by len(pi_bins)-1 are returned:

\[\xi_{11}(r_{p},\pi), \xi_{12}(r_{p},\pi), \xi_{22}(r_{p},\pi),\]

the autocorrelation of sample1, the cross-correlation between sample1 and sample2, and the autocorrelation of sample2, respectively. If do_auto or do_cross is set to False, the appropriate result(s) are returned.

cov_matrix(ices) : numpy.ndarray

[len(rp_bins)-1] times [len(pi_bins)-1] by [len(rp_bins)-1] times [len(pi_bins)-1] ndarray containing the covariance matrix \(C_{ij}\).

If sample2 is passed as input, three ndarrays of shape [len(rp_bins)-1] times [len(pi_bins)-1] by [len(rp_bins)-1] times [len(pi_bins)-1] are returned:

\[C^{11}_{ij}, C^{12}_{ij}, C^{22}_{ij},\]

the associated covariance matrices of \(\xi_{11}(r_{p},\pi), \xi_{12}(r_{p},\pi), \xi_{22}(r_{p},\pi)\). If do_auto or do_cross is set to False, the corresponding result(s) are not returned.
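
For instance, a cross-correlation call can be unpacked in the order listed above. The following is only a sketch under that assumption; the point samples are randomly generated and the unpacked variable names are illustrative:

>>> import numpy as np
>>> from halotools.mock_observables import rp_pi_tpcf_jackknife
>>> Lbox = 100.
>>> sample1 = np.random.uniform(0, Lbox, (1000, 3))
>>> sample2 = np.random.uniform(0, Lbox, (1000, 3))
>>> randoms = np.random.uniform(0, Lbox, (3000, 3))
>>> rp_bins = np.logspace(0.5, 1.5, 8)
>>> pi_bins = np.logspace(0.5, 1.5, 8)
>>> xi_11, xi_12, xi_22, cov_11, cov_12, cov_22 = rp_pi_tpcf_jackknife(
...     sample1, randoms, rp_bins, pi_bins, sample2=sample2, Nsub=3, period=Lbox)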

Notes

The jackknife sampling of pair counts is done internally in npairs_jackknife_xy_z.

Pairs are counted such that when ‘removing’ subvolume \(k\), and counting a pair in subvolumes \(i\) and \(j\):

\[\begin{split}D_i D_j += \left \{ \begin{array}{ll} 1.0 & : i \neq k, j \neq k \\ 0.5 & : i \neq k, j=k \\ 0.5 & : i = k, j \neq k \\ 0.0 & : i=j=k \\ \end{array} \right.\end{split}\]

The returned covariance matrix is 2-D, with indices in row-major order. To access the covariance between the (ith rp_bin, jth pi_bin) element and the (kth rp_bin, lth pi_bin) element of a covariance matrix C, use sigma2 = C[Npi_bins*i + j, Npi_bins*k + l], where Npi_bins = len(pi_bins) - 1.

Examples

For demonstration purposes we create a randomly distributed set of points within a periodic cube of box length Lbox = 100 Mpc/h.

>>> Npts = 1000
>>> Lbox = 100.
>>> x = np.random.uniform(0, Lbox, Npts)
>>> y = np.random.uniform(0, Lbox, Npts)
>>> z = np.random.uniform(0, Lbox, Npts)

We transform our x, y, z points into the array shape used by the pair-counter by taking the transpose of the result of numpy.vstack. This boilerplate transformation is used throughout the mock_observables sub-package:

>>> coords = np.vstack((x,y,z)).T

Create some ‘randoms’ in the same way:

>>> Nran = Npts*3
>>> xran = np.random.uniform(0, Lbox, Nran)
>>> yran = np.random.uniform(0, Lbox, Nran)
>>> zran = np.random.uniform(0, Lbox, Nran)
>>> randoms = np.vstack((xran,yran,zran)).T

Calculate the jackknife covariance matrix by dividing the simulation box into 3 samples per dimension (for a total of 3^3 = 27 jackknife samples):

>>> rp_bins = np.logspace(0.5, 1.5, 8)
>>> pi_bins = np.logspace(0.5, 1.5, 8)
>>> xi, xi_cov = rp_pi_tpcf_jackknife(coords, randoms, rp_bins, pi_bins, Nsub=3, period=Lbox)
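
As a quick sanity check, the shapes of the returned arrays follow the Returns section above: one entry of xi per (rp, pi) bin pair, and one row and column of xi_cov per bin pair (the assert statements below are only illustrative):

>>> nbins = (len(rp_bins) - 1) * (len(pi_bins) - 1)      # 7 x 7 = 49 (rp, pi) bin pairs
>>> assert xi.shape == (len(rp_bins) - 1, len(pi_bins) - 1)
>>> assert xi_cov.shape == (nbins, nbins)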

To get the standard deviation in each bin of the correlation function:

>>> sigma = np.sqrt(np.diagonal(xi_cov)).reshape(len(rp_bins)-1, len(pi_bins)-1)
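
Following the indexing convention described in the Notes section, the covariance between the correlation function in the (ith rp bin, jth pi bin) and in the (kth rp bin, lth pi bin) can be read off directly from xi_cov, e.g. for bins (0, 1) and (2, 3):

>>> Npi_bins = len(pi_bins) - 1
>>> i, j, k, l = 0, 1, 2, 3
>>> sigma2 = xi_cov[Npi_bins*i + j, Npi_bins*k + l]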