cuda-0.8.0.0: FFI binding to the CUDA interface for programming NVIDIA GPUs

Copyright: [2017] Trevor L. McDonell
License: BSD
Safe Haskell: None
Language: Haskell98

Foreign.CUDA.Driver.Unified

Description

Unified addressing functions for the low-level driver interface

Overview

CUDA devices can share a unified address space with the host. For these devices, there is no distinction between a device pointer and a host pointer---the same pointer value may be used to access memory from the host program and from a kernel running on the device (with exceptions enumerated below).

Support

Whether or not a device supports unified addressing may be queried by calling attribute with the UnifiedAddressing attribute.

Unified addressing is automatically enabled in 64-bit processes on devices with compute capability at least 2.0.
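As a minimal sketch, assuming the initialise and device functions re-exported from Foreign.CUDA.Driver, and assuming attribute returns the raw integer value of the queried attribute:

import Foreign.CUDA.Driver

-- True when the device shares a unified address space with the host
supportsUnifiedAddressing :: Device -> IO Bool
supportsUnifiedAddressing dev = do
  n <- attribute dev UnifiedAddressing
  return (n /= 0)

main :: IO ()
main = do
  initialise []
  dev <- device 0
  ok  <- supportsUnifiedAddressing dev
  putStrLn ("Unified addressing: " ++ show ok)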

Looking up information about pointers

It is possible to look up information about the memory which backs a pointer; that is, whether the memory resides on the host or the device (and in particular, which device).

Automatic mapping of host memory

All host memory allocated in all contexts using mallocHostArray or mallocHostForeignPtr is always directly accessible from all contexts on all devices which support unified addressing. This is the case whether or not the flags Portable or DeviceMapped are specified.

The pointer value through which allocated host memory may be accessed in kernels on all devices which support unified addressing is the same as the pointer value on the host; that is, it is not necessary to call getDevicePtr for these allocations.

Note that this is not the case for memory allocated using the WriteCombined option; see below.
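For example, a pinned host allocation can be used on such devices via its host pointer value directly. The following is only a sketch: the context functions create and destroy, freeHost, and the useHostPtr accessor of HostPtr are assumed to be in scope from Foreign.CUDA.Driver.

import Foreign.CUDA.Driver

main :: IO ()
main = do
  initialise []
  dev <- device 0
  ctx <- create dev []

  -- Pinned host memory; no Portable or DeviceMapped flags are required
  hp <- mallocHostArray [] 1024 :: IO (HostPtr Float)

  -- On devices supporting unified addressing this pointer value is also
  -- valid in kernels; no getDevicePtr call is needed
  print (useHostPtr hp)

  freeHost hp
  destroy ctx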

Automatic registration of peer memory

Upon enabling direct access from a context which supports unified addressing to another peer context which supports unified addressing using add, all memory allocated in the peer context will immediately be accessible by the current context. The device pointer values are the same on both contexts.
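A sketch of enabling peer access between two devices follows. The module path Foreign.CUDA.Driver.Context.Peer, the accessible query, and the empty flag list passed to add are assumptions about the binding.

import Control.Monad (when)
import Foreign.CUDA.Driver
import qualified Foreign.CUDA.Driver.Context.Peer as Peer

main :: IO ()
main = do
  initialise []
  dev0 <- device 0
  dev1 <- device 1
  ctx0 <- create dev0 []
  ctx1 <- create dev1 []        -- ctx1 is now the current context

  ok <- Peer.accessible dev1 dev0
  when ok $
    -- Grant the current context (ctx1) access to memory allocated in ctx0.
    -- Allocations in ctx0 become addressable here at the same device
    -- pointer values.
    Peer.add ctx0 []

  destroy ctx1
  destroy ctx0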

Exceptions (disjoint addressing)

Not all memory may be accessed on devices through the same pointer value with which it is accessed on the host. The exceptions are host arrays registered with registerArray, and those allocated with the flag WriteCombined. In these cases, the host and device arrays have distinct addresses (pointer values). However, the device address is guaranteed not to overlap with any valid host pointer range, and is guaranteed to have the same value across all contexts which support unified addressing.

The value of the device pointer may be queried with getDevicePtr from any context supporting unified addressing.
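For instance, a write-combined host allocation must be translated explicitly, as in the following sketch. The WriteCombined flag, getDevicePtr, freeHost, and the useHostPtr and useDevicePtr accessors are assumed to be in scope from Foreign.CUDA.Driver.

import Foreign.CUDA.Driver

main :: IO ()
main = do
  initialise []
  dev <- device 0
  ctx <- create dev []

  -- Write-combined host memory is one of the exceptions: its device
  -- address differs from its host address
  hp <- mallocHostArray [WriteCombined] 1024 :: IO (HostPtr Float)
  dp <- getDevicePtr [] hp

  -- Distinct pointer values, but the device address is the same in every
  -- context that supports unified addressing
  print (useHostPtr hp, useDevicePtr dp)

  freeHost hp
  destroy ctx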

Synopsis

Querying pointer attributes

Setting pointer attributes

advise :: Storable a => Ptr a -> Int -> Advice -> Maybe Device -> IO ()

Advise the driver about how a given range of memory will be used. If the supplied device is Nothing, then the preferred location is taken to mean the CPU.

http://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__UNIFIED.html#group__CUDA__UNIFIED_1g27608c857a9254789c13f3e3b72029e2

Requires CUDA-8.0.
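A small usage sketch, taking the signature above at face value. The SetPreferredLocation constructor of Advice is assumed from the underlying cuMemAdvise flags and may be named differently in the binding, and ptr and n stand for a managed allocation obtained elsewhere.

import qualified Foreign.CUDA.Driver.Unified as U
import Foreign.CUDA.Driver (Device)
import Foreign.Ptr (Ptr)

-- Prefer keeping the n elements starting at ptr resident on the given
-- device, or on the CPU when no device is supplied
preferOn :: Maybe Device -> Ptr Float -> Int -> IO ()
preferOn mdev ptr n = U.advise ptr n U.SetPreferredLocation mdev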