GPU Programming in JavaScript

You probably don’t think about JavaScript when you hear the words “GPU programming.” However, it should come as no surprise that there is a library for just about anything, and that includes GPU programming in JavaScript. Let’s get one thing clear, though: you should probably not do this if you are looking to do serious GPU programming. For that, I recommend working with Nvidia’s CUDA in C or C++. The only way to get at the GPU from JavaScript is through WebGL, which means anything you do is essentially converted into textures and shaders to run on the GPU and then converted back into your desired output. Luckily, a library called gpu.js handles all of this for us, but it’s still not an ideal situation.

Here’s another thing to keep in mind before you jump in: just because something can run on the GPU doesn’t mean that it should. The GPU excels at doing simple operations in parallel over large amounts of data. Multiplying two very large matrices is a perfect fit: the straightforward algorithm needs n³ multiply-adds for n × n matrices, and every output cell can be computed independently of the others. So let’s implement some simple GPU programming in JavaScript using matrix multiplication.

Index HTML

Since we are designing this to run in the browser, we will need to create a simple index.html file. You can work with Node.js if you want, but you will fall back to CPU execution, because WebGL, which this library needs for GPU access, is not currently available in Node.js. Begin by creating a simple index.html file that links the gpu.js library and a custom JavaScript file that we will write.

<!DOCTYPE html>
<html lang="en">
<head>
  <title></title>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link href="css/style.css" rel="stylesheet">
</head>
<body>
  <!-- The gpu.js library -->
  <script src="./gpu.min.js"></script>
  <!-- Our custom script with the code below -->
  <script src="./gpu.js"></script>
</body>
</html>

GPU Programming

Now let’s start writing our first GPU program. We are going to multiply a very large matrix against itself. Begin by creating an instance of the GPU library.

const gpu = new GPU();
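As a quick aside, gpu.js also accepts a mode option in its constructor that forces CPU execution. This is a minimal sketch based on the library’s documented options; we will come back to this instance at the end to compare timings against the GPU.

// A second, CPU-only instance, kept around for a timing comparison later.
const cpu = new GPU({ mode: 'cpu' });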

Next, we are going to define a constant that will determine the size of the matrix (length and width).

const SIZE = 1000;

Now, let’s create a 2D array to represent our matrix. I am going to fill this array with random numbers.

const value = [];
for (let i = 0; i < SIZE; i++) {
  const row = [];
  for (let j = 0; j < SIZE; j++) {
    // Fill each cell with a random number (the exact values don't matter).
    row.push(Math.random() * (1000 - 1 + 1) + 1);
  }
  value.push(row);
}

If we were to print this, we would see a matrix containing 1000 rows and 1000 columns. We are going to multiply this matrix by itself: each cell of the output is the dot product of one row of the first matrix with one column of the second. Now, let’s create the kernel function that will run on the GPU.

const matMult = gpu.createKernel(function (a, b) {
  let sum = 0;
  for (let i = 0; i < this.constants.size; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}, {
  constants: { size: SIZE },
  output: [SIZE, SIZE],
});

I’m creating a kernel function that traverses two matrices (in this case, our matrix against itself). this.thread.x and this.thread.y refer to the x and y position of the output cell currently being computed. Since this will be compiled to run on WebGL, you are limited in which math operations are available to you. Also, all loops need to be bounded: you can’t write for (let i = 0; i < n; i++) with an arbitrary variable n; the bound has to be either a hardcoded number or a constant passed in when the kernel is created, which is why the loop above uses this.constants.size.
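To make that constraint concrete, here is a small, purely illustrative sketch (the kernel name and the 1D summation are my own example, not part of the matrix code): the loop compiles because its bound comes from a constant fixed at creation time.

// Illustrative only: sums the first `size` entries of a 1D input array.
const sumKernel = gpu.createKernel(function (a) {
  let sum = 0;
  // The bound comes from this.constants.size, fixed when the kernel is created.
  for (let i = 0; i < this.constants.size; i++) {
    sum += a[i];
  }
  return sum;
}, {
  constants: { size: 1000 },
  output: [1],
});

// A bound like a plain `n` captured from outside the kernel is not something
// the kernel compiler can see, so stick to literals or declared constants.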

Finally, in the options we pass to createKernel, we declare the output size of the resulting matrix. Now, let’s run the kernel and log the output.

const result = matMult(value, value);
console.log(result);
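As a quick sanity check, you can recompute a single cell by hand and compare it to the kernel’s output. The two may differ slightly in the last decimal places, since WebGL does its arithmetic in single precision.

// Optional sanity check: recompute one output cell on the CPU and compare.
let expected = 0;
for (let i = 0; i < SIZE; i++) {
  expected += value[0][i] * value[i][0];
}
console.log(expected, result[0][0]);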

When you open index.html and run this, you will notice your GPU usage spike for about half a second, and then you will have the product of the matrix multiplication. If you try the same code in Node.js (which keeps you CPU bound), it takes roughly 10 seconds to finish. That’s a big difference from running it on the GPU.
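If you want to measure the difference yourself without leaving the browser, one rough approach is to wrap the calls in console.time and build the same kernel on the CPU-mode instance from earlier (matMultCpu is just a name for this sketch, and the numbers you see will depend entirely on your hardware and browser):

// Time the GPU kernel.
console.time('gpu matMult');
matMult(value, value);
console.timeEnd('gpu matMult');

// Build the same kernel on the CPU-mode instance for comparison.
const matMultCpu = cpu.createKernel(function (a, b) {
  let sum = 0;
  for (let i = 0; i < this.constants.size; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}, {
  constants: { size: SIZE },
  output: [SIZE, SIZE],
});

// Expect this one to take dramatically longer.
console.time('cpu matMult');
matMultCpu(value, value);
console.timeEnd('cpu matMult');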

You can find this code over on GitHub.