TODO Nanite at home meme?
## Triangles, Vertices, Meshes and Level of Detail
Feel free to skip this chapter if you already know all these concepts, but I suspect there will be plenty of Rust programmers unfamiliar with computer graphics.
TODO pic of some low poly 3D model
You've probably all seen a 3D mesh made up of many triangles, but how do we actually represent them? The triangles themselves are quite simple: just a list of `u16` or `u32` indices, where each set of 3 describes the corner IDs to connect into a triangle. Far more interesting are those corners, which we call vertices: they not only carry the position they are located at, but can have various other attributes attached to them. For example, normals describe the "up" direction of a surface, which is important for lighting calculations. Or texture coordinates, which describe how 2D images should be wrapped around our 3D mesh; think of the wrapper around chocolate Easter bunnies.
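To make that concrete, here's a minimal sketch of how such a mesh could be laid out in Rust. Keep in mind the field names and the choice of attributes are just for illustration, not the actual data structures of this project:

```rust
// Illustrative only: the attribute set and layout are example choices,
// not this project's actual vertex format.
struct Vertex {
    position: [f32; 3],  // where the vertex sits in 3D space
    normal: [f32; 3],    // the "up" direction of the surface, used for lighting
    tex_coord: [f32; 2], // where this vertex maps onto a 2D texture
}

struct Mesh {
    vertices: Vec<Vertex>, // the corners, with all their attributes
    indices: Vec<u32>,     // every 3 indices form one triangle
}
```

A quad, for example, would be 4 vertices plus the 6 indices `[0, 1, 2, 2, 3, 0]`: two triangles sharing an edge.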
As we're writing a renderer, we don't just want to store models, we want to render them to the screen and look at them from different directions!
For real-time applications you'd typically use a process called "rasterization" to turn a model into colorful pixels on screen. It includes programmable shaders that allow us to manipulate the appearance of our model:
1. The vertex shader runs once per vertex and calculates where our vertex will end up on the screen, but it can also change the other attributes attached to the vertex.
2. A hardware rasterizer assembles the triangles and figures out where they end up on the screen. It then creates a stream of fragments, one for each pixel a triangle overlaps.
3. The fragment shader evaluates the color of every emitted fragment.
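To give a feel for the two programmable stages, here is a rough CPU-side sketch of the roles they play. This is purely illustrative: real shaders run on the GPU and are written in a shading language such as GLSL, WGSL or rust-gpu, and the types and the simple diffuse lighting here are assumptions made up for this example:

```rust
// 1. Vertex shader: runs once per vertex, projecting it into clip space.
fn vertex_shader(position: [f32; 3], view_proj: &[[f32; 4]; 4]) -> [f32; 4] {
    // multiply the position (as a homogeneous coordinate)
    // by the camera's view-projection matrix
    let p = [position[0], position[1], position[2], 1.0];
    let mut out = [0.0_f32; 4];
    for row in 0..4 {
        for col in 0..4 {
            out[row] += view_proj[row][col] * p[col];
        }
    }
    out
}

// 2. The rasterizer sits between the two shaders as fixed-function hardware;
//    we don't reimplement it here.

// 3. Fragment shader: runs once per fragment, computing its color.
fn fragment_shader(normal: [f32; 3], light_dir: [f32; 3]) -> [f32; 4] {
    // simple diffuse shading: surfaces facing the light are brighter
    let d: f32 = normal.iter().zip(&light_dir).map(|(n, l)| n * l).sum();
    let brightness = d.max(0.0);
    [brightness, brightness, brightness, 1.0]
}
```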
The cost of rendering scales largely with the shaders that need to be run, or in other words: the number of pixels on screen plus the number of vertices of the model.
As we move a model further away from the camera, the mesh gets smaller and fewer fragment shaders need to be evaluated. However, we would still need to call the vertex shader for every single vertex to know where it ends up on screen, even if the detail it describes would be too small to notice. To improve performance, it is common practice to not just have a single mesh, but to create multiple meshes at different Levels of Detail (LODs) that can be swapped out depending on the distance to the camera.
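What that swap could look like in code, with made-up distance thresholds (`lods[0]` being the most detailed mesh):

```rust
// Hypothetical LOD selection: pick a coarser mesh the further away we are.
// The "drop one detail level every time the distance doubles" heuristic and
// the 10-unit base threshold are arbitrary example values.
fn select_lod<T>(lods: &[T], distance: f32) -> &T {
    let level = (distance / 10.0).log2().max(0.0) as usize;
    &lods[level.min(lods.len() - 1)]
}
```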
## Terrain Generation
## Nanite
Reducing the number of vertices in a model can be automated; this process is called "mesh simplification".
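To give a rough idea of what simplification does, here is a toy "edge collapse", the basic operation many simplification algorithms are built on. This is my own minimal illustration; real simplifiers spend most of their effort deciding *which* edge to collapse, typically using a geometric error metric:

```rust
// Toy edge collapse: merge vertex `b` into vertex `a`, then drop triangles
// that became degenerate. Purely illustrative.
fn collapse_edge(indices: &mut Vec<u32>, a: u32, b: u32) {
    // redirect every use of `b` to `a`
    for i in indices.iter_mut() {
        if *i == b {
            *i = a;
        }
    }
    // remove triangles where two corners now coincide
    let filtered: Vec<u32> = indices
        .chunks(3)
        .filter(|t| t[0] != t[1] && t[1] != t[2] && t[0] != t[2])
        .flatten()
        .copied()
        .collect();
    *indices = filtered;
}
```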