
raytracing in the browser is surprisingly uncomplicated (feat. webgpu)

Boris Radulov
published on 2025-02-12

raytracing?

Raytracing is a really cool technique to render photorealistic images. It works by simulating a camera, the environment, and a metric f*ck-ton of light rays. You “shoot” the rays from the camera and keep track of how they bounce, what surfaces they hit, etc.
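At its core, each ray boils down to an intersection test against the scene’s geometry. Here’s a minimal sketch in WGSL (the shading language we’ll meet later in this post) of the classic ray/sphere test — the `Ray` struct and `hit_sphere` name are illustrative, not taken from any particular engine:

```
// Illustrative sketch: the core primitive of any raytracer is the
// ray/surface intersection test.
struct Ray {
  origin : vec3f,
  dir    : vec3f, // assumed normalized
};

// Returns the distance t along the ray to the nearest intersection
// with a sphere at `center` of radius `radius`, or -1.0 on a miss.
fn hit_sphere(center : vec3f, radius : f32, ray : Ray) -> f32 {
  let oc = ray.origin - center;
  let b = dot(oc, ray.dir);
  let c = dot(oc, oc) - radius * radius;
  let disc = b * b - c; // discriminant of the quadratic
  if (disc < 0.0) {
    return -1.0; // the ray misses the sphere entirely
  }
  return -b - sqrt(disc); // nearest of the two roots
}
```

A full raytracer runs tests like this against every object, keeps the closest hit, and then spawns new rays from that hit point to simulate bounces.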

fig: oversimplified diagram of raytracing by NVIDIA

This is more or less the mechanism that most CGI software such as Blender, Maya, etc. use to generate their images (alongside something called Physically Based Rendering to handle the simulation of surface properties).

Here are some example images, generated by this technique:

fig: “Green Woods (2012)” by major4z, done in Blender

fig: “Stormtrooper (2023)” by me, done in my custom raytracing engine for my bachelor’s final thesis

isn’t this super compute-heavy?

Yes. It is.

But it is also an “embarrassingly parallel problem” (real term btw).

In parallel computing, an embarrassingly parallel workload or problem is one where little or no effort is needed to split the problem into a number of parallel tasks. This is due to minimal or no dependency upon communication between the parallel tasks, or for results between them.

source: “Embarrassingly Parallel” on Wikipedia

In fact, ray tracing is literally listed as a prime example of such a problem in the first section of the article.

A common example of an embarrassingly parallel problem is 3D video rendering handled by a graphics processing unit, where each frame (forward method) or pixel (ray tracing method) can be handled with no interdependency.

source: “Embarrassingly Parallel” on Wikipedia

in the browser???

Like most such problems, you want to solve it on a GPU. GPUs have far more threads than a CPU, and since the threads don’t need to communicate with each other, orchestration is extremely easy. In fact, most GPU APIs nowadays have something called a “compute shader” for exactly these use cases.
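To make the “one thread per pixel” idea literal, here’s a hedged sketch of a minimal WGSL compute shader where each invocation shades exactly one pixel, with zero communication between threads. The storage-texture binding and the gradient “shading” are assumptions for illustration:

```
// Each invocation handles one pixel independently of all others --
// this is what makes raytracing embarrassingly parallel.
@group(0) @binding(0) var output : texture_storage_2d<rgba8unorm, write>;

@compute @workgroup_size(8, 8)
fn main(@builtin(global_invocation_id) id : vec3u) {
  let dims = textureDimensions(output);
  if (id.x >= dims.x || id.y >= dims.y) {
    return; // invocation falls outside the image
  }
  // A real raytracer would generate and trace a ray here;
  // this sketch just writes a UV gradient to the pixel.
  let uv = vec2f(f32(id.x), f32(id.y)) / vec2f(f32(dims.x), f32(dims.y));
  textureStore(output, id.xy, vec4f(uv, 0.5, 1.0));
}
```

The GPU dispatches thousands of these invocations at once; since no pixel depends on any other, there is nothing to synchronize.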

Before we get to how, I’d quickly like to answer the “why”.

Like it or not, a lot of things are moving to the browser. The success of tools like Spline shows that 3D modelling is also heading in this direction. It’s only natural that tools like Spline will get a raytracing preview, even if the production renders are offloaded to the cloud.

webgpu baby

In May of 2021 some madmen decided that WebGL is cringe (it is, they were 100% correct) and that we needed a new web-based GPU API that is modern and maps directly to Vulkan, Direct3D and Metal. WebGPU was born.

It has compute shaders out of the box, renders directly to a canvas, supports complex resource binding for pipelines, etc. It also came with a new shading language called WGSL. You can learn more about it here: https://google.github.io/tour-of-wgsl/. The language is quite similar to other shading languages:

// functions.wgsl
// Frame counter uploaded from the CPU once per frame.
@binding(0) @group(0) var<uniform> frame : u32;

@vertex
fn vtx_main(@builtin(vertex_index) vertex_index : u32) -> @builtin(position) vec4f {
  // Hard-coded clip-space positions for a single triangle.
  const pos = array(
    vec2( 0.0,  0.5),
    vec2(-0.5, -0.5),
    vec2( 0.5, -0.5)
  );

  return vec4f(pos[vertex_index], 0, 1);
}

@fragment
fn frag_main() -> @location(0) vec4f {
  // Pulse the green channel over time using the frame counter.
  return vec4(1, sin(f32(frame) / 128), 0, 1);
}