GPU Parallel For
Alea GPU radically simplifies GPU programming with a highly efficient GPU parallel-for method.
It executes a delegate or lambda expression on the GPU. All variables accessed from the
delegate are captured in a closure and automatically transferred between CPU and GPU memory.
No GPU programming experience is required to use it. The usage is very similar to the parallel for method
of the .NET Task Parallel Library: simply replace Parallel.For with the Gpu.For method of a GPU instance.
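The pattern described above can be sketched as follows. This is a minimal example following the Alea GPU API (the Alea and Alea.Parallel namespaces and the Gpu.Default device accessor are assumed here); treat it as an illustration rather than a complete program:

```csharp
using Alea;            // core Alea GPU types (Gpu)
using Alea.Parallel;   // Gpu.For extension methods

static class ParallelForSample
{
    static void Main()
    {
        const int n = 1000000;
        var x = new double[n];
        var y = new double[n];
        var z = new double[n];
        for (var i = 0; i < n; i++) { x[i] = i; y[i] = 2.0 * i; }

        var gpu = Gpu.Default;  // first available GPU device

        // The lambda is compiled to GPU code; x, y and z are captured
        // in the closure and moved between CPU and GPU memory automatically.
        gpu.For(0, n, i => z[i] = x[i] + y[i]);
    }
}
```

Replacing `gpu.For` with `Parallel.For` runs the same loop body on the CPU, which is exactly the similarity to the Task Parallel Library mentioned above.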
GPU Parallel Aggregate
The Alea GPU parallel aggregate complements the parallel-for method. Together they can be used to solve many
computationally intensive problems on the GPU. It uses a delegate or lambda expression to aggregate a vector into
a single value. All variables accessed from the delegate are captured in a closure and automatically transferred
between CPU and GPU memory. The usage is similar to the .NET Aggregate method:
call the Aggregate method of a GPU instance with the array to aggregate and the delegate as arguments.
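A minimal sketch of the usage just described (again assuming the Alea and Alea.Parallel namespaces; the operator must be associative so that partial results can be combined in parallel):

```csharp
using Alea;
using Alea.Parallel;

static class AggregateSample
{
    static void Main()
    {
        var data = new int[] { 1, 2, 3, 4, 5 };
        var gpu = Gpu.Default;

        // Reduce the array to a single value on the GPU using
        // the given binary operator; here, a parallel sum.
        var sum = gpu.Aggregate(data, (a, b) => a + b);
    }
}
```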
As Fast as Native GPU Code
The GPU code generated by Alea GPU runs as fast as native GPU code developed with CUDA C++ or Fortran.
Offloading compute-intensive parts to a GPU can speed up multi-threaded .NET C# or F# applications by a factor of 100 or more.
Use multiple GPUs to further improve performance.
Alea GPU runs on Windows, Linux and Mac OS X as well as on any cloud platform providing GPU powered virtual machines.
The JIT compilation makes platform specific GPU builds unnecessary, which simplifies the build and deployment process.
Automatic Memory Management
Alea GPU simplifies GPU development with automatic memory management, which is a great benefit for novice GPU programmers.
Conventional GPU programming requires that the programmer manages the GPU memory and copies data between the CPU and the GPU explicitly.
Alea GPU can automatically move data between CPU and GPU memory economically, which significantly reduces boilerplate code.
Using .NET Arrays on the GPU
Alea GPU makes multi-dimensional .NET arrays available in GPU code as well, an important feature for unifying CPU and GPU code.
The array rank and the length of each dimension are directly accessible from the GPU.
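For example, an ordinary two-dimensional .NET array can be used directly inside GPU code, and its dimensions queried with GetLength as on the CPU. The flattened-index scheme below is an illustrative sketch, not a prescribed pattern:

```csharp
using Alea;
using Alea.Parallel;

static class ArraySample
{
    static void Main()
    {
        var a = new double[100, 200];   // ordinary 2D .NET array
        var gpu = Gpu.Default;

        // The 2D array is accessed directly from GPU code; its rank
        // and dimension lengths are available inside the delegate.
        gpu.For(0, a.GetLength(0) * a.GetLength(1), i =>
        {
            var row = i / a.GetLength(1);
            var col = i % a.GetLength(1);
            a[row, col] = row + 0.001 * col;
        });
    }
}
```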
Unifying CPU and GPU Code
Using delegates with parallel for and parallel aggregate, together with .NET arrays in GPU kernel code, makes it possible to
unify CPU and GPU code to a large extent. Often the core computation logic can be written in such a way that it executes
on the CPU as well as on the GPU.
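One way to picture this unification: the same delegate can be handed to the Task Parallel Library on the CPU and to a GPU instance. The SAXPY-style loop body below is a hypothetical example chosen for illustration:

```csharp
using System;
using System.Threading.Tasks;
using Alea;
using Alea.Parallel;

static class UnifiedSample
{
    static void Main()
    {
        const int n = 1 << 20;
        var x = new float[n];
        var y = new float[n];

        // The core computation logic is a plain delegate...
        Action<int> saxpy = i => y[i] = 2.0f * x[i] + y[i];

        // ...that runs unchanged on the CPU:
        Parallel.For(0, n, saxpy);

        // ...and on the GPU:
        Gpu.Default.For(0, n, saxpy);
    }
}
```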
Alea GPU can be used in Jupyter notebooks or in the F# and C# interactive consoles for GPU scripting and rapid development.
Debugging and Profiling
Alea GPU delivers outstanding developer experience and improves developer productivity with first class tooling for coding, debugging and
profiling fully integrated in Visual Studio.
Expert GPU Programming
For expert GPU developers Alea GPU exposes the CUDA programming model, giving access to
advanced features such as special CUDA instructions, shared memory, constant memory, CUDA unified memory and texture memory.
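In the CUDA programming model, an explicit kernel is written against the thread hierarchy and launched with a grid and block configuration. The sketch below follows the style of Alea GPU tutorials (the Alea.CSharp namespace providing blockIdx, blockDim and threadIdx, and the LaunchParam type, are assumptions here); treat the exact names as illustrative:

```csharp
using Alea;
using Alea.CSharp;  // blockIdx, blockDim, threadIdx

static class KernelSample
{
    // An explicit CUDA-style kernel: each thread handles one element.
    private static void Kernel(int[] result, int[] a, int[] b)
    {
        var i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < result.Length) result[i] = a[i] + b[i];
    }

    static void Main()
    {
        const int n = 1 << 20;
        var a = new int[n];
        var b = new int[n];
        var r = new int[n];

        var gpu = Gpu.Default;
        var lp = new LaunchParam(256, 256);  // grid size, block size
        gpu.Launch(Kernel, lp, r, a, b);
    }
}
```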
Libraries and Performance Primitives
Alea GPU tightly integrates many libraries known from the NVIDIA CUDA C/C++ ecosystem,
such as cuBLAS, cuRAND and cuDNN. The interfaces of these library functions are extended so that
they can be called directly with .NET array types.
In addition, it ships with a collection of important parallel primitives such as parallel reduce and scan.
Quick Start Samples
Our large collection of samples makes it easy to get started with GPU programming. Self-contained projects illustrate all important concepts
and can be taken as a starting point for your own development.