Brick Library 0.1
Performance-portable stencil datalayout & codegen
brick-dpc.h File Reference

This file is largely used to give the DPCPP interface a similar footprint to CUDA and HIP so that it can easily be used with brick-gpu.h. More...

#include <CL/sycl.hpp>
#include "brick.h"
#include "brick-gpu.h"
Include dependency graph for brick-dpc.h:


Classes

struct  DPCBrick< Dim< BDims... >, Dim< Folds... > >
 Brick data structure. More...
 

Macros

#define gpuMalloc(p, s)   dpc_malloc(p, s)
 
#define gpuMemcpy(d, p, s, k)   dpc_memcpy(d, p, s, k)
 
#define gpuFree(p)   dpc_free(p)
 
#define gpuMemcpyKind   dpcMemcpyKind
 
#define gpuMemcpyHostToDevice   dpcMemcpyHostToDevice
 
#define gpuMemcpyDeviceToHost   dpcMemcpyDeviceToHost
 
#define gpuSuccess   dpc_success
 
#define gpuGetErrorString(e)   dpc_get_error_string(e)
 

Enumerations

enum  dpcError_t { dpc_success , memcpy_failed , malloc_failed }
 
enum  dpcMemcpyKind { dpcMemcpyHostToDevice , dpcMemcpyDeviceToHost }
 

Functions

template<typename T >
dpcError_t dpc_malloc (T **buffer, size_t size)
 Interface to allocate memory on a DPCPP GPU with a similar footprint to CUDA. More...
 
template<typename T >
dpcError_t dpc_memcpy (T *dst, T *ptr, size_t size, dpcMemcpyKind type)
 Copy data from host to DPCPP GPU. If data must be returned to the host after kernel execution, a buffer/accessor pattern is preferred. More...
 
template<typename T >
dpcError_t dpc_free (T *ptr)
 Free allocated memory on a DPCPP GPU. More...
 

Detailed Description

This file is largely used to give the DPCPP interface a similar footprint to CUDA and HIP so that it can easily be used with brick-gpu.h.

Author
Samantha Hirsch
Date
2021-07-22

Macro Definition Documentation

◆ gpuFree

#define gpuFree (   p)    dpc_free(p)

◆ gpuGetErrorString

#define gpuGetErrorString (   e)    dpc_get_error_string(e)

◆ gpuMalloc

#define gpuMalloc (   p, s  )    dpc_malloc(p, s)

◆ gpuMemcpy

#define gpuMemcpy (   d, p, s, k  )    dpc_memcpy(d, p, s, k)

◆ gpuMemcpyDeviceToHost

#define gpuMemcpyDeviceToHost   dpcMemcpyDeviceToHost

◆ gpuMemcpyHostToDevice

#define gpuMemcpyHostToDevice   dpcMemcpyHostToDevice

◆ gpuMemcpyKind

#define gpuMemcpyKind   dpcMemcpyKind

◆ gpuSuccess

#define gpuSuccess   dpc_success

Enumeration Type Documentation

◆ dpcError_t

enum dpcError_t
Enumerator
dpc_success 
memcpy_failed 
malloc_failed 

◆ dpcMemcpyKind

enum dpcMemcpyKind
Enumerator
dpcMemcpyHostToDevice 
dpcMemcpyDeviceToHost 

Function Documentation

◆ dpc_free()

template<typename T >
dpcError_t dpc_free ( T *  ptr)
inline

Free allocated memory on a DPCPP GPU.

Template Parameters
T	The type of the data being freed, usually implicit.
Parameters
ptr	Pointer to the previously allocated memory.
Returns
dpc_success (alias of gpuSuccess) if the memory was successfully freed.

◆ dpc_malloc()

template<typename T >
dpcError_t dpc_malloc ( T **  buffer,
size_t  size 
)
inline

Interface to allocate memory on a DPCPP GPU with a similar footprint to CUDA.

Template Parameters
T	Type of the data to be stored in the buffer, usually implicit.
Parameters
buffer	Where to store the pointer to the allocated memory.
size	Size of the allocated memory in bytes.
Returns
dpc_success (alias of gpuSuccess) if memory was successfully allocated.

◆ dpc_memcpy()

template<typename T >
dpcError_t dpc_memcpy ( T *  dst,
T *  ptr,
size_t  size,
dpcMemcpyKind  type 
)
inline

Copy data from host to DPCPP GPU. If data must be returned to the host after kernel execution, a buffer/accessor pattern is preferred.

Template Parameters
T	The type of the data being copied, usually implicit.
Parameters
dst	Pointer to the destination on the GPU.
ptr	Pointer to the data source on the host.
size	The size of the data copy in bytes.
type	Currently may only be dpcMemcpyHostToDevice (alias of gpuMemcpyHostToDevice).
Returns
dpc_success (alias of gpuSuccess) if the copy completed successfully.