CUDPP 1.1.1
CUDPP Public Interface

Algorithm Interface

CUDPP_DLL CUDPPResult cudppScan (CUDPPHandle planHandle, void *d_out, const void *d_in, size_t numElements)
 Performs a scan operation of numElements on its input in GPU memory (d_in) and places the output in GPU memory (d_out), with the scan parameters specified in the plan pointed to by planHandle.
CUDPP_DLL CUDPPResult cudppSegmentedScan (CUDPPHandle planHandle, void *d_out, const void *d_idata, const unsigned int *d_iflags, size_t numElements)
 Performs a segmented scan operation of numElements on its input in GPU memory (d_idata) and places the output in GPU memory (d_out), with the scan parameters specified in the plan pointed to by planHandle.
CUDPP_DLL CUDPPResult cudppMultiScan (CUDPPHandle planHandle, void *d_out, const void *d_in, size_t numElements, size_t numRows)
 Performs numRows parallel scan operations of numElements each on its input (d_in) and places the output in d_out, with the scan parameters specified in the plan pointed to by planHandle. Exactly like cudppScan except that it runs on multiple rows in parallel.
CUDPP_DLL CUDPPResult cudppCompact (CUDPPHandle planHandle, void *d_out, size_t *d_numValidElements, const void *d_in, const unsigned int *d_isValid, size_t numElements)
 Given an array d_in and an array of 1/0 flags in d_isValid, returns a compacted array in d_out containing only the "valid" values from d_in.
CUDPP_DLL CUDPPResult cudppSort (CUDPPHandle planHandle, void *d_keys, void *d_values, int keyBits, size_t numElements)
 Sorts key-value pairs or keys only.
CUDPP_DLL CUDPPResult cudppSparseMatrixVectorMultiply (CUDPPHandle sparseMatrixHandle, void *d_y, const void *d_x)
 Performs the matrix-vector multiply y = A*x for an arbitrary sparse matrix A and vector x.
CUDPP_DLL CUDPPResult cudppRand (CUDPPHandle planHandle, void *d_out, size_t numElements)
 Rand puts numElements random 32-bit elements into d_out.
CUDPP_DLL CUDPPResult cudppRandSeed (const CUDPPHandle planHandle, unsigned int seed)
 Sets the seed used for rand.

Plan Interface

CUDPP_DLL CUDPPResult cudppPlan (CUDPPHandle *planHandle, CUDPPConfiguration config, size_t numElements, size_t numRows, size_t rowPitch)
 Create a CUDPP plan.
CUDPP_DLL CUDPPResult cudppDestroyPlan (CUDPPHandle planHandle)
 Destroy a CUDPP Plan.
CUDPP_DLL CUDPPResult cudppSparseMatrix (CUDPPHandle *sparseMatrixHandle, CUDPPConfiguration config, size_t numNonZeroElements, size_t numRows, const void *A, const unsigned int *h_rowIndices, const unsigned int *h_indices)
 Create a CUDPP Sparse Matrix Object.
CUDPP_DLL CUDPPResult cudppDestroySparseMatrix (CUDPPHandle sparseMatrixHandle)
 Destroy a CUDPP Sparse Matrix Object.

Detailed Description

The CUDPP public interface comprises the functions, structs, and enums defined in cudpp.h. Public interface functions call functions in the Application-Level interface. The public interface functions include Plan Interface functions and Algorithm Interface functions. Plan Interface functions are used for creating CUDPP Plan objects, which contain configuration details, intermediate storage space, and, in the case of cudppSparseMatrix(), data. The Algorithm Interface is the set of functions that do the real work of CUDPP, such as cudppScan() and cudppSparseMatrixVectorMultiply().


Function Documentation

CUDPP_DLL CUDPPResult cudppScan ( CUDPPHandle  planHandle,
void *  d_out,
const void *  d_in,
size_t  numElements 
)

Performs a scan operation of numElements on its input in GPU memory (d_in) and places the output in GPU memory (d_out), with the scan parameters specified in the plan pointed to by planHandle.

The input to a scan operation is an input array, a binary associative operator (like + or max), and an identity element for that operator (the identity for + is 0). The output of scan is the same size as its input. Informally, the output at each element is the result of the operator applied to all of the inputs that come before it. For instance, the output of sum-scan at each element is the sum of all the input elements before that input.

More formally, for associative operator ⊕: out_i = in_0 ⊕ in_1 ⊕ ... ⊕ in_(i-1).

CUDPP supports "exclusive" and "inclusive" scans. For the ADD operator, an exclusive scan computes the sum of all input elements before the current element, while an inclusive scan computes the sum of all input elements up to and including the current element.

Before calling scan, create an internal plan using cudppPlan().

After you are finished with the scan plan, clean up with cudppDestroyPlan().

Parameters:
 [in]  planHandle   Handle to plan for this scan
 [out] d_out        output of scan, in GPU memory
 [in]  d_in         input to scan, in GPU memory
 [in]  numElements  number of elements to scan
See also:
cudppPlan, cudppDestroyPlan

Todo:
Return more specific errors
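
A minimal usage sketch (not part of the library: the wrapper name run_sum_scan is hypothetical, and d_in/d_out are assumed to have been allocated with cudaMalloc by the caller):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 /* Exclusive sum-scan of n floats already resident in GPU memory. */
 CUDPPResult run_sum_scan(float *d_out, const float *d_in, size_t n)
 {
     CUDPPConfiguration config;
     config.algorithm = CUDPP_SCAN;
     config.op        = CUDPP_ADD;
     config.datatype  = CUDPP_FLOAT;
     config.options   = CUDPP_OPTION_FORWARD | CUDPP_OPTION_EXCLUSIVE;

     CUDPPHandle plan;
     CUDPPResult res = cudppPlan(&plan, config, n, 1, 0);
     if (res != CUDPP_SUCCESS)
         return res;

     res = cudppScan(plan, d_out, d_in, n);

     cudppDestroyPlan(plan);
     return res;
 }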

CUDPP_DLL CUDPPResult cudppSegmentedScan ( CUDPPHandle  planHandle,
void *  d_out,
const void *  d_idata,
const unsigned int *  d_iflags,
size_t  numElements 
)

Performs a segmented scan operation of numElements on its input in GPU memory (d_idata) and places the output in GPU memory (d_out), with the scan parameters specified in the plan pointed to by planHandle.

The input to a segmented scan operation is an input array of data, an input array of flags which demarcate segments, a binary associative operator (like + or max), and an identity element for that operator (the identity for + is 0). The array of flags is the same length as the input, with 1 marking the first element of a segment and 0 everywhere else. The output of segmented scan is the same size as its input. Informally, the output at each element is the result of the operator applied to all of the inputs that come before it in that segment. For instance, the output of segmented sum-scan at each element is the sum of all the input elements before that input in that segment.

More formally, for associative operator ⊕: out_i = in_k ⊕ in_(k+1) ⊕ ... ⊕ in_(i-1), where k is the index of the first element of the segment in which i lies.

We support both "exclusive" and "inclusive" variants. For a segmented sum-scan, the exclusive variant computes the sum of all input elements before the current element in that segment, while the inclusive variant computes the sum of all input elements up to and including the current element, in that segment.
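
For example, for a segmented sum-scan (segment boundaries shown with |):

 d_idata   = [ 3 1 7 0 | 4 1 6 | 3 ]
 d_iflags  = [ 1 0 0 0   1 0 0   1 ]
 inclusive = [ 3 4 11 11 | 4 5 11 | 3 ]
 exclusive = [ 0 3 4 11  | 0 4 5  | 0 ]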

Before calling segmented scan, create an internal plan using cudppPlan().

After you are finished with the scan plan, clean up with cudppDestroyPlan().

Parameters:
 [in]  planHandle   Handle to plan for this scan
 [out] d_out        output of segmented scan, in GPU memory
 [in]  d_idata      input data to segmented scan, in GPU memory
 [in]  d_iflags     input flags to segmented scan, in GPU memory
 [in]  numElements  number of elements to perform segmented scan on
See also:
cudppPlan, cudppDestroyPlan

Todo:
Return more specific errors

CUDPP_DLL CUDPPResult cudppMultiScan ( CUDPPHandle  planHandle,
void *  d_out,
const void *  d_in,
size_t  numElements,
size_t  numRows 
)

Performs numRows parallel scan operations of numElements each on its input (d_in) and places the output in d_out, with the scan parameters specified in the plan pointed to by planHandle. Exactly like cudppScan except that it runs on multiple rows in parallel.

Note that to achieve good performance with cudppMultiScan, one should allocate the device arrays passed to it so that all rows are aligned to the correct boundaries for the architecture the app is running on. The easy way to do this is to use cudaMallocPitch() to allocate a 2D array on the device, and then pass the pitch it returns to cudppPlan() via the rowPitch parameter (cudaMallocPitch() reports the pitch in bytes, while rowPitch is specified in elements).
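
A sketch of this pattern (the helper name multi_scan_rows is hypothetical; error checking and the filling of d_in are omitted):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 /* Exclusive sum-scan of numRows rows of numElements floats each. */
 void multi_scan_rows(size_t numElements, size_t numRows)
 {
     float *d_in, *d_out;
     size_t pitchBytes;
     cudaMallocPitch((void**)&d_in,  &pitchBytes, numElements * sizeof(float), numRows);
     cudaMallocPitch((void**)&d_out, &pitchBytes, numElements * sizeof(float), numRows);
     /* ... fill d_in, e.g. with cudaMemcpy2D ... */

     CUDPPConfiguration config;
     config.algorithm = CUDPP_SCAN;
     config.op        = CUDPP_ADD;
     config.datatype  = CUDPP_FLOAT;
     config.options   = CUDPP_OPTION_FORWARD | CUDPP_OPTION_EXCLUSIVE;

     CUDPPHandle plan;
     cudppPlan(&plan, config, numElements, numRows,
               pitchBytes / sizeof(float));  /* convert pitch to elements */
     cudppMultiScan(plan, d_out, d_in, numElements, numRows);
     cudppDestroyPlan(plan);

     cudaFree(d_in);
     cudaFree(d_out);
 }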

Parameters:
 [in]  planHandle   handle to CUDPPScanPlan
 [out] d_out        output of scan, in GPU memory
 [in]  d_in         input to scan, in GPU memory
 [in]  numElements  number of elements (per row) to scan
 [in]  numRows      number of rows to scan in parallel
See also:
cudppScan, cudppPlan

Todo:
Return more specific errors

CUDPP_DLL CUDPPResult cudppCompact ( CUDPPHandle  planHandle,
void *  d_out,
size_t *  d_numValidElements,
const void *  d_in,
const unsigned int *  d_isValid,
size_t  numElements 
)

Given an array d_in and an array of 1/0 flags in d_isValid, returns a compacted array in d_out containing only the "valid" values from d_in.

Takes as input an array of elements in GPU memory (d_in) and an equal-sized unsigned int array in GPU memory (d_isValid) that indicates which of those input elements are valid. The output is a packed array, in GPU memory, of only those elements marked as valid.

Internally, uses cudppScan.

Example:

 d_in      = [ a b c d e f ]
 d_isValid = [ 1 0 1 1 0 1 ]
 d_out     = [ a c d f ]
Todo:
[MJH] We need to evaluate whether cudppCompact should be a core member of the public interface. It's not clear to me that what the user always wants is a final compacted array. Often one just wants the array of indices to which each input element should go in the output. The split() routine used in radix sort might make more sense to expose.
Parameters:
 [in]  planHandle          handle to CUDPPCompactPlan
 [out] d_out               compacted output
 [out] d_numValidElements  set by cudppCompact to the number of elements in d_in marked valid by d_isValid
 [in]  d_in                input to compact
 [in]  d_isValid           which elements in d_in are valid
 [in]  numElements         number of elements in d_in

Todo:
Return more specific errors.
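
A usage sketch (the wrapper name compact_floats is hypothetical; the d_ prefix on d_numValidElements suggests GPU memory, which is assumed here, so the count is copied back to the host afterwards):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 /* Compact n floats according to 1/0 flags; returns the number kept. */
 size_t compact_floats(float *d_out, const float *d_in,
                       const unsigned int *d_isValid, size_t n)
 {
     CUDPPConfiguration config;
     config.algorithm = CUDPP_COMPACT;
     config.datatype  = CUDPP_FLOAT;
     config.options   = CUDPP_OPTION_FORWARD;
     config.op        = CUDPP_ADD;  /* unused by compact */

     CUDPPHandle plan;
     cudppPlan(&plan, config, n, 1, 0);

     size_t *d_numValid;
     cudaMalloc((void**)&d_numValid, sizeof(size_t));

     cudppCompact(plan, d_out, d_numValid, d_in, d_isValid, n);

     size_t numValid = 0;
     cudaMemcpy(&numValid, d_numValid, sizeof(size_t), cudaMemcpyDeviceToHost);

     cudaFree(d_numValid);
     cudppDestroyPlan(plan);
     return numValid;
 }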

CUDPP_DLL CUDPPResult cudppSort ( CUDPPHandle  planHandle,
void *  d_keys,
void *  d_values,
int  keyBits,
size_t  numElements 
)

Sorts key-value pairs or keys only.

Takes as input an array of keys in GPU memory (d_keys) and an optional array of corresponding values, and outputs sorted arrays of keys and (optionally) values in place. Key-value and key-only sort is selected through the configuration of the plan, using the options CUDPP_OPTION_KEYS_ONLY and CUDPP_OPTION_KEY_VALUE_PAIRS.

Supported key types are CUDPP_FLOAT and CUDPP_UINT. Values can be any 32-bit type (internally, values are treated only as a payload and cast to unsigned int).
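
A usage sketch (the wrapper name sort_pairs is hypothetical; CUDPP_SORT_RADIX is assumed to be the radix-sort algorithm enumerator):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 /* In-place sort of n (key, value) pairs of unsigned ints in GPU memory. */
 CUDPPResult sort_pairs(unsigned int *d_keys, unsigned int *d_values, size_t n)
 {
     CUDPPConfiguration config;
     config.algorithm = CUDPP_SORT_RADIX;
     config.datatype  = CUDPP_UINT;
     config.options   = CUDPP_OPTION_KEY_VALUE_PAIRS;
     config.op        = CUDPP_ADD;  /* unused by sort */

     CUDPPHandle plan;
     CUDPPResult res = cudppPlan(&plan, config, n, 1, 0);
     if (res != CUDPP_SUCCESS)
         return res;

     res = cudppSort(plan, d_keys, d_values, 32, n);  /* sort on all 32 key bits */
     cudppDestroyPlan(plan);
     return res;
 }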

Todo:
Determine if we need to provide an "out of place" sort interface.
Parameters:
 [in]  planHandle   handle to CUDPPSortPlan
 [out] d_keys       keys by which key-value pairs will be sorted
 [in]  d_values     values to be sorted
 [in]  keyBits      the number of least significant bits in each element of d_keys to sort by
 [in]  numElements  number of elements in d_keys and d_values
See also:
cudppPlan, CUDPPConfiguration, CUDPPAlgorithm

Todo:
Return more specific errors.

CUDPP_DLL CUDPPResult cudppSparseMatrixVectorMultiply ( CUDPPHandle  sparseMatrixHandle,
void *  d_y,
const void *  d_x 
)

Perform matrix-vector multiply y = A*x for arbitrary sparse matrix A and vector x.

Given a matrix object handle (which has been initialized using cudppSparseMatrix()), this function multiplies the input vector d_x by the matrix referred to by sparseMatrixHandle, returning the result in d_y.

Parameters:
 sparseMatrixHandle  Handle to a sparse matrix object created with cudppSparseMatrix()
 d_y                 The output vector, y
 d_x                 The input vector, x
See also:
cudppSparseMatrix, cudppDestroySparseMatrix

Todo:
Return more specific errors.

CUDPP_DLL CUDPPResult cudppRand ( CUDPPHandle  planHandle,
void *  d_out,
size_t  numElements 
)

Rand puts numElements random 32-bit elements into d_out.

Outputs numElements random values to d_out. d_out must be of type unsigned int, allocated in device memory.

The algorithm used for the random number generation is stored in planHandle. Depending on the specification of the pseudo-random number generator (PRNG), the generator may have one or more seeds. To set the seed, use cudppRandSeed().
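
A usage sketch (CUDPP_RAND_MD5 is assumed to be the enumerator for the MD5 PRNG noted below; error checking is omitted):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 size_t n = 4096;
 unsigned int *d_out;
 cudaMalloc((void**)&d_out, n * sizeof(unsigned int));

 CUDPPConfiguration config;
 config.algorithm = CUDPP_RAND_MD5;
 config.datatype  = CUDPP_UINT;
 config.options   = 0;
 config.op        = CUDPP_ADD;  /* unused by rand */

 CUDPPHandle plan;
 cudppPlan(&plan, config, n, 1, 0);

 cudppRandSeed(plan, 95123u);   /* fix the seed so the sequence is reproducible */
 cudppRand(plan, d_out, n);

 cudppDestroyPlan(plan);
 cudaFree(d_out);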

Todo:
Currently only MD5 PRNG is supported. We may provide more rand routines in the future.
Parameters:
 [in]  planHandle   Handle to plan for rand
 [out] d_out        output of rand, in GPU memory; should be an array of unsigned integers
 [in]  numElements  number of elements in d_out
See also:
cudppPlan, CUDPPConfiguration, CUDPPAlgorithm

Todo:
Return more specific errors

CUDPP_DLL CUDPPResult cudppRandSeed ( const CUDPPHandle  planHandle,
unsigned int  seed 
)

Sets the seed used for rand.

The seed is crucial to any random number generator as it allows a sequence of random numbers to be replicated. Since there may be multiple different rand algorithms in CUDPP, cudppRandSeed uses planHandle to determine which seed to set. Each rand algorithm has its own unique set of seeds depending on what the algorithm needs.

Parameters:
 [in]  planHandle  the handle to the plan which specifies which rand seed to set
 [in]  seed        the value to which the internal CUDPP seed will be set

CUDPP_DLL CUDPPResult cudppPlan ( CUDPPHandle *  planHandle,
CUDPPConfiguration  config,
size_t  numElements,
size_t  numRows,
size_t  rowPitch 
)

Create a CUDPP plan.

A plan is a data structure containing state and intermediate storage space that CUDPP uses to execute algorithms on data. A plan is created by passing to cudppPlan() a CUDPPConfiguration that specifies the algorithm, operator, datatype, and options. The size of the data must also be passed to cudppPlan(), in the numElements, numRows, and rowPitch arguments. These sizes are used to allocate internal storage space at the time the plan is created. The CUDPP planner may use the sizes, options, and information about the present hardware to choose optimal settings.

Note that numElements is the maximum size of the array to be processed with this plan. That means that a plan may be re-used to process (for example, to sort or scan) smaller arrays.
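
For example (a sketch; config, d_out, and d_in are assumed to be set up as in the cudppScan example above):

 CUDPPHandle plan;
 cudppPlan(&plan, config, maxElements, 1, 0);  /* sized for the largest case */

 cudppScan(plan, d_out, d_in, maxElements);    /* full-size scan          */
 cudppScan(plan, d_out, d_in, maxElements/2);  /* smaller scan, same plan */

 cudppDestroyPlan(plan);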

Parameters:
 [out] planHandle   A pointer to an opaque handle to the internal plan
 [in]  config       The configuration struct specifying algorithm and options
 [in]  numElements  The maximum number of elements to be processed
 [in]  numRows      The number of rows (for 2D operations) to be processed
 [in]  rowPitch     The pitch of the rows of input data, in elements

Todo:
implement cudppReduce()

CUDPP_DLL CUDPPResult cudppDestroyPlan ( CUDPPHandle  planHandle)

Destroy a CUDPP Plan.

Deletes the plan referred to by planHandle and all associated internal storage.

Parameters:
 [in]  planHandle  The CUDPPHandle to the plan to be destroyed

CUDPP_DLL CUDPPResult cudppSparseMatrix ( CUDPPHandle *  sparseMatrixHandle,
CUDPPConfiguration  config,
size_t  numNonZeroElements,
size_t  numRows,
const void *  A,
const unsigned int *  h_rowIndices,
const unsigned int *  h_indices 
)

Create a CUDPP Sparse Matrix Object.

The sparse matrix plan is a data structure containing state and intermediate storage space that CUDPP uses to perform sparse matrix-dense vector multiply. This plan is created by passing to cudppSparseMatrix() a CUDPPConfiguration that specifies the algorithm (sparse matrix-dense vector multiply) and datatype, along with the sparse matrix itself in CSR format. The number of non-zero elements in the sparse matrix must also be passed as numNonZeroElements. This is used to allocate internal storage space at the time the sparse matrix plan is created.

Parameters:
 [out] sparseMatrixHandle  A pointer to an opaque handle to the sparse matrix object
 [in]  config              The configuration struct specifying algorithm and options
 [in]  numNonZeroElements  The number of non-zero elements in the sparse matrix
 [in]  numRows             The number of rows in y, x, and A for y = A * x
 [in]  A                   The matrix data
 [in]  h_rowIndices        An array containing the index of the start of each row in A
 [in]  h_indices           An array containing the column index of each nonzero element in A
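
A usage sketch covering creation, multiply, and cleanup (CUDPP_SPMVMULT is assumed to be the sparse matrix-vector multiply enumerator, and h_rowIndices is assumed to hold one row-start offset per row):

 #include <cuda_runtime.h>
 #include "cudpp.h"

 /* y = A*x for the 3x3 matrix below, stored in CSR form:
  *     | 10  0 20 |
  * A = |  0 30  0 |     x = [1 2 3],  so y = [70 60 190]
  *     | 40  0 50 |
  */
 float        A[]          = { 10, 20, 30, 40, 50 }; /* non-zero values      */
 unsigned int rowIndices[] = { 0, 2, 3 };            /* row-start offsets    */
 unsigned int indices[]    = { 0, 2, 1, 0, 2 };      /* column of each value */
 float        x[]          = { 1, 2, 3 };

 CUDPPConfiguration config;
 config.algorithm = CUDPP_SPMVMULT;  /* assumed enumerator name */
 config.datatype  = CUDPP_FLOAT;
 config.options   = 0;
 config.op        = CUDPP_ADD;

 CUDPPHandle mat;
 cudppSparseMatrix(&mat, config, 5, 3, A, rowIndices, indices);

 float *d_x, *d_y;
 cudaMalloc((void**)&d_x, 3 * sizeof(float));
 cudaMalloc((void**)&d_y, 3 * sizeof(float));
 cudaMemcpy(d_x, x, 3 * sizeof(float), cudaMemcpyHostToDevice);

 cudppSparseMatrixVectorMultiply(mat, d_y, d_x);

 cudppDestroySparseMatrix(mat);
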
CUDPP_DLL CUDPPResult cudppDestroySparseMatrix ( CUDPPHandle  sparseMatrixHandle)

Destroy a CUDPP Sparse Matrix Object.

Deletes the sparse matrix data and plan referred to by sparseMatrixHandle and all associated internal storage.

Parameters:
 [in]  sparseMatrixHandle  The CUDPPHandle to the matrix object to be destroyed