Commit ef1f059

Merge branch 'release'
2 parents 7333a89 + f26eb35

9 files changed: 360 additions & 9 deletions

apps/typegpu-docs/astro.config.mjs

Lines changed: 3 additions & 3 deletions
```diff
@@ -156,18 +156,18 @@ export default defineConfig({
       // },
     ]),
   },
-  DEV && {
+  {
     label: 'Ecosystem',
     items: stripFalsy([
       {
         label: '@typegpu/noise',
         slug: 'ecosystem/typegpu-noise',
       },
-      {
+      DEV && {
        label: '@typegpu/color',
        slug: 'ecosystem/typegpu-color',
      },
-      {
+      DEV && {
        label: 'Third-party',
        slug: 'ecosystem/third-party',
      },
```

apps/typegpu-docs/src/content/docs/ecosystem/typegpu-noise.mdx

Lines changed: 231 additions & 0 deletions
---
title: "@typegpu/noise"
---

The `@typegpu/noise` package offers a set of pseudo-random utilities for use in TypeGPU and WebGPU projects. At its core, the package provides a pseudo-random number generator for uniformly distributed values (every value equally likely) in the range `[0, 1)`, as well as higher-level utilities built on top.

It also features a [Perlin noise](#perlin-noise) implementation, which is useful for generating smooth, natural-looking variation in visual effects, terrains, and other procedural elements.

:::note
Threads do not share the generator's `State`. As a result, unless you change the seed or provide thread-dependent variables, each thread will produce the same sequence of sampled values.
:::

## Use with either TypeGPU or WebGPU

Every utility function described in this guide can be used from both TypeGPU and vanilla WebGPU. This makes it straightforward to leverage the TypeGPU ecosystem in a WebGPU project without migrating large parts of your codebase.

### TypeGPU

Calling utility functions from [TypeGPU functions](/TypeGPU/fundamentals/functions/) links them automatically.
In the example below, resolving `randomVec2f` into a shader will include the code for `randf.sample` and all of its dependencies.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
// ---cut---
import { randf } from '@typegpu/noise';

const randomVec2f = tgpu.fn([], d.vec2f)(() => {
  const x = randf.sample(); // returns a random float in [0, 1)
  const y = randf.sample(); // returns the next random float in [0, 1)
  return d.vec2f(x, y);
});

// ...
```
### WebGPU

The `tgpu.resolve` API can be used to inject TypeGPU resources (constants, functions, etc.) into a WGSL shader.

In the example below, the `sample` function is accessed both as a named function and as part of the `randf` object.
The resolution mechanism handles deduplication out of the box and omits code that is unused by your shader, so only one definition of `sample` will be included in the final shader.

```ts twoslash
import * as d from 'typegpu/data';
// ---cut---
import { randf } from '@typegpu/noise';
// `typegpu` is necessary to inject library code into your custom shader
import tgpu from 'typegpu';

const shader = tgpu.resolve({
  template: `
    fn random_vec2f() -> vec2f {
      // Accessing the 'sample' function directly
      let x = sample();
      // Accessing the 'sample' function as part of the 'randf' object
      let y = randf.sample();
      return vec2f(x, y);
    }

    // ...
  `,
  externals: { sample: randf.sample, randf },
});

// The shader is just a WGSL string
shader;
// ^?
```

Does this mean we allow object access inside of WGSL shaders?... yes, yes we do 🙈. [To learn more about resolution, check our "Resolve" guide](/TypeGPU/fundamentals/resolve/)
## Pseudo-random number generator

The `@typegpu/noise` package provides a pseudo-random number generator (PRNG) that generates uniformly distributed random numbers in the range `[0, 1)`.
Each call to `randf.sample` returns the next random float in the sequence, allowing for predictable and repeatable results. The seed can be set or reset
using the `randf.seedN` functions, where `N` is the number of components of the seed.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
import { randf } from '@typegpu/noise';

const main = tgpu['~unstable'].fragmentFn({
  in: { pos: d.builtin.position },
  out: d.vec4f,
})(({ pos }) => {
  randf.seed2(pos.xy); // Generate a different sequence for each pixel

  return d.vec4f(
    randf.sample(), // returns a random float in [0, 1)
    randf.sample(), // returns the next random float in [0, 1)
    0.0,
    1.0
  );
});
```
There are higher-level utilities built on top of `randf.sample`:
- `inUnitCircle` - returns a random 2D vector uniformly distributed inside a unit circle
- `inUnitCube` - returns a random 3D vector uniformly distributed inside a unit cube
- `onHemisphere` - returns a random 3D vector uniformly distributed on the surface of the unit hemisphere oriented along the given normal vector
- `onUnitSphere` - returns a random 3D vector uniformly distributed on the surface of a unit sphere
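The word *uniformly* is doing real work in the list above. Picking a random angle and a random radius directly would cluster points near the center of the circle, so uniform disk sampling takes the square root of the sampled radius. The sketch below illustrates the math in plain CPU-side TypeScript — it is not the library's implementation, and `rand` is a stand-in for a `[0, 1)` generator such as `randf.sample`:

```typescript
// Uniformly sample a point inside the unit circle.
// `rand` stands in for a [0, 1) generator such as randf.sample.
function inUnitCircle(rand: () => number): [number, number] {
  const angle = rand() * 2 * Math.PI;
  // sqrt compensates for area growing with r^2;
  // using `r = rand()` directly would over-sample the center.
  const r = Math.sqrt(rand());
  return [r * Math.cos(angle), r * Math.sin(angle)];
}

const [x, y] = inUnitCircle(Math.random);
console.log(x * x + y * y < 1); // always true: the point lies inside the circle
```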
## Perlin noise

The package exports implementations of both 2D and 3D Perlin noise: `perlin2d` and `perlin3d`, respectively.
Using them is as simple as calling the `.sample` function with the desired coordinates; it returns a value in the range `[-1, 1]`.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
// ---cut---
import { perlin2d } from '@typegpu/noise';

const main = tgpu['~unstable'].fragmentFn({
  in: { pos: d.builtin.position },
  out: d.vec4f,
})(({ pos }) => {
  const noise = perlin2d.sample(pos.xy.mul(0.1)); // Scale the coordinates for smoother noise
  return d.vec4f(noise, noise, noise, 1); // Use the noise value for RGB channels
});
```
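A common pattern built on top of a single Perlin sample is fractal (octave) noise: summing several samples at increasing frequency and decreasing amplitude. The combining logic is sketched below in plain TypeScript — `fbm` is a hypothetical helper, not part of `@typegpu/noise`, and `noise2d` is a stand-in for a sampler like `perlin2d.sample`:

```typescript
// Fractal Brownian motion: sum `octaves` layers of noise, each with
// doubled frequency and halved amplitude, then normalize back to [-1, 1].
function fbm(
  noise2d: (x: number, y: number) => number,
  x: number,
  y: number,
  octaves = 4,
): number {
  let sum = 0;
  let amplitude = 1;
  let frequency = 1;
  let norm = 0; // total amplitude, used to keep the result in [-1, 1]
  for (let i = 0; i < octaves; i++) {
    sum += amplitude * noise2d(x * frequency, y * frequency);
    norm += amplitude;
    amplitude *= 0.5;
    frequency *= 2;
  }
  return sum / norm;
}

// Example with a stand-in sine-based "noise"
const n = fbm((x, y) => Math.sin(x + y), 0.3, 0.7);
console.log(n >= -1 && n <= 1); // true
```

On the GPU the same loop would live in a TypeGPU function calling `perlin2d.sample`; the normalization keeps the output in the same `[-1, 1]` range as a single sample.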
This simple usage is enough for most cases, but by default, `perlin2d.sample` computes the underlying gradients on demand, per pixel.
This can be inefficient for large images or when the same noise is sampled multiple times.
To improve performance, you can precompute the gradients using either a *Static* or a *Dynamic* cache. **In our tests, the efficiency gain can be up to 10x!**
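Conceptually, both caches trade memory for recomputation: a gradient is generated once per lattice point and stored, then merely looked up during sampling, with lookups wrapping around the cache size. The CPU-side sketch below illustrates that trade-off only — the shapes here are assumptions for illustration, and the real caches store their data in GPU memory:

```typescript
// Precompute one value per integer lattice point, then look values up
// instead of re-deriving them for every sample. `gradient` is a stand-in
// for the hash-based gradient generation that Perlin noise performs.
function makeGradientCache(
  gradient: (ix: number, iy: number) => [number, number],
  width: number,
  height: number,
) {
  const table: [number, number][] = [];
  for (let iy = 0; iy < height; iy++) {
    for (let ix = 0; ix < width; ix++) {
      table.push(gradient(ix, iy)); // computed once per lattice point
    }
  }
  // Wrapping the lookup makes the noise repeat every `width`/`height`
  // units — which is why, with a domain of size 10, sampling at 0.5
  // and at 10.5 produces the same value.
  const wrap = (n: number, m: number) => ((n % m) + m) % m;
  return (ix: number, iy: number) =>
    table[wrap(iy, height) * width + wrap(ix, width)];
}
```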
### Static cache

A static cache presumes that the domain of the noise function is fixed and cannot change between shader invocations.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
const root = await tgpu.init();
import { perlin3d } from '@typegpu/noise';

const main = tgpu['~unstable'].computeFn({ workgroupSize: [1] })(() => {
  const value = perlin3d.sample(d.vec3f(0.5, 0, 0));
  const wrappedValue = perlin3d.sample(d.vec3f(10.5, 0, 0)); // the same as `value`!
});

// ---cut---
const cache = perlin3d.staticCache({ root, size: d.vec3u(10, 10, 1) });

const pipeline = root['~unstable']
  // Plugging the cache into the pipeline
  .pipe(cache.inject())
  // ...
  .withCompute(main)
  .createPipeline();
```
Or in WebGPU:

```ts twoslash
/// <reference types="@webgpu/types" />
import * as d from 'typegpu/data';

declare const device: GPUDevice;
import tgpu from 'typegpu';
import { perlin3d } from '@typegpu/noise';

// ---cut---
const root = tgpu.initFromDevice({ device });
const cache = perlin3d.staticCache({ root, size: d.vec3u(10, 10, 1) });

const { code, usedBindGroupLayouts, catchall } = tgpu.resolveWithContext({
  template: `
    fn main() {
      let value = perlin3d.sample(vec3f(0.5, 0., 0.));
      let wrappedValue = perlin3d.sample(vec3f(10.5, 0., 0.)); // the same as 'value'!
      // ...
    }

    // ...
  `,
  externals: { perlin3d },
  config: (cfg) => cfg.pipe(cache.inject()),
  // Or just:
  // config: cache.inject(),
});
```
### Dynamic cache

If you need to change the size of the noise domain at runtime (in between shader invocations)
without having to recompile the shader, you have to use a dynamic cache. With it comes a more
complex setup.

```ts twoslash
import tgpu from 'typegpu';
import * as d from 'typegpu/data';
import { perlin3d } from '@typegpu/noise';

const main = tgpu['~unstable'].computeFn({ workgroupSize: [1] })(() => {
  const value = perlin3d.sample(d.vec3f(0.5, 0, 0));
  const wrappedValue = perlin3d.sample(d.vec3f(10.5, 0, 0)); // the same as `value`!
});

const root = await tgpu.init();
// ---cut---
const cacheConfig = perlin3d.dynamicCacheConfig();
// Holds all resources the perlin cache needs access to
const dynamicLayout = tgpu.bindGroupLayout({ ...cacheConfig.layout });

const pipeline = root['~unstable']
  // Plugging the cache into the pipeline
  .pipe(cacheConfig.inject(dynamicLayout.$))
  // ...
  .withCompute(main)
  .createPipeline();

// Instantiating the cache with an initial size
const cache = cacheConfig.instance(root, d.vec3u(10, 10, 1));

// A function for updating the size of the cache
function initBindGroup(size: d.v3u) {
  cache.size = size;
  return root.createBindGroup(dynamicLayout, cache.bindings);
}
let bindGroup = initBindGroup(d.vec3u(10, 10, 1));

// Dispatching the pipeline
pipeline
  .with(dynamicLayout, bindGroup)
  .dispatchWorkgroups(1);

// Can be called again to reinitialize the cache with
// a different domain size
bindGroup = initBindGroup(d.vec3u(5, 5, 1));
```
Lines changed: 10 additions & 0 deletions
```html
<div>
  There is a known WebGPU issue that may cause structs containing three-element
  vectors to be copied incorrectly. This example automatically detects whether
  your device is affected by this issue.
</div>

<div class="result">
  The test did not finish running. If you keep seeing this even after a refresh,
  your device/browser may not support WebGPU.
</div>
```
Lines changed: 104 additions & 0 deletions
```ts
// irrelevant import so the file becomes a module
import tgpu from 'typegpu';
const t = tgpu;

// setup
const adapter = await navigator.gpu?.requestAdapter();
const device = await adapter?.requestDevice();
if (!device) {
  throw new Error('WebGPU is not supported!');
}
const copyModule = device.createShaderModule({
  label: 'copying compute module',
  code: `
    struct Item {
      vec: vec3u,
      num: u32,
    }

    @group(0) @binding(0) var<storage, read> sourceBuffer: Item;
    @group(0) @binding(1) var<storage, read_write> targetBuffer: Item;

    @compute @workgroup_size(1) fn computeShader_0(@builtin(global_invocation_id) gid: vec3u) {
      var item = sourceBuffer;
      targetBuffer = item;
    }
  `,
});

const pipeline = device.createComputePipeline({
  label: 'copying compute pipeline',
  layout: 'auto',
  compute: {
    module: copyModule,
  },
});

// work buffer 1
const sourceBuffer = device.createBuffer({
  label: 'source buffer',
  size: 16,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC |
    GPUBufferUsage.COPY_DST,
});

// work buffer 2
const targetBuffer = device.createBuffer({
  label: 'target buffer',
  size: 16,
  usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC |
    GPUBufferUsage.COPY_DST,
});

// buffer for reading the results
const resultBuffer = device.createBuffer({
  label: 'result buffer',
  size: 16,
  usage: GPUBufferUsage.MAP_READ | GPUBufferUsage.COPY_DST,
});

const bindGroup = device.createBindGroup({
  label: 'bind group for work buffers',
  layout: pipeline.getBindGroupLayout(0),
  entries: [
    { binding: 0, resource: { buffer: sourceBuffer } },
    { binding: 1, resource: { buffer: targetBuffer } },
  ],
});

// input copying and compute pass

const input = new DataView(new ArrayBuffer(16));
input.setUint32(0, 1);
input.setUint32(4, 3);
input.setUint32(8, 5);
input.setUint32(12, 7);
device.queue.writeBuffer(sourceBuffer, 0, input);

const encoder = device.createCommandEncoder({
  label: 'copying encoder',
});
const pass = encoder.beginComputePass({
  label: 'copying compute pass',
});
pass.setPipeline(pipeline);
pass.setBindGroup(0, bindGroup);
pass.dispatchWorkgroups(1);
pass.end();

encoder.copyBufferToBuffer(targetBuffer, 0, resultBuffer, 0, 16);
device.queue.submit([encoder.finish()]);

await resultBuffer.mapAsync(GPUMapMode.READ);
const result = new DataView(resultBuffer.getMappedRange().slice());
resultBuffer.unmap();

const table = document.querySelector<HTMLDivElement>('.result');
if (!table) {
  throw new Error('Nowhere to display the results');
}

console.log(input, result);
table.innerText = (input.getUint32(12) === result.getUint32(12))
  ? 'The bug DOES NOT occur on this device.'
  : 'The bug DOES occur on this device.';
```
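The `size: 16` used for every buffer above follows from WGSL's layout rules: `vec3<u32>` has an alignment of 16 and a size of 12, so `num` lands in the padding slot at offset 12 and the whole struct occupies 16 bytes — which is exactly the word the test compares. (Presumably, an affected implementation copies the `vec3` as if it were a full 16 bytes, clobbering `num`.) A small sketch of the offset arithmetic, computed by hand from the general rules:

```typescript
// Offsets for `struct Item { vec: vec3u, num: u32 }` under WGSL layout rules:
// vec3<u32> has align 16 / size 12; u32 has align 4 / size 4.
const alignUp = (n: number, align: number) => Math.ceil(n / align) * align;

const vecOffset = alignUp(0, 16); // 0
const numOffset = alignUp(vecOffset + 12, 4); // 12 — packed into the vec3's padding
const structSize = alignUp(numOffset + 4, 16); // 16 — struct align is the max member align

console.log(vecOffset, numOffset, structSize); // 0 12 16
```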
Lines changed: 5 additions & 0 deletions
```json
{
  "title": "Struct Copying Test",
  "category": "tests",
  "tags": ["experimental"]
}
```

packages/typegpu-noise/src/perlin-2d/dynamic-cache.ts

Lines changed: 1 addition & 1 deletion
````diff
@@ -72,7 +72,7 @@ const DefaultPerlin2DLayoutPrefix = 'perlin2dCache__' as const;
  *
  * @param options A set of general options for instances of this cache configuration.
  *
- * ### Basic usage
+ * --- Basic usage
  * @example
  * ```ts
  * const cacheConfig = perlin2d.dynamicCacheConfig();
````
