Physically based unidirectional (backwards) Monte Carlo path tracer written with the HIPRT and Orochi libraries.
HIPRT is AMD's equivalent to OptiX. It gives access to the ray tracing accelerators of RDNA2+ AMD GPUs and, since it is not AMD specific, it can also run on NVIDIA devices (although without taking advantage of the RT cores there).
The Orochi library loads the HIP and CUDA APIs at runtime, meaning that the application doesn't have to be recompiled to be used on a GPU from a different vendor (unlike HIP alone which, despite being compatible with both NVIDIA and AMD hardware, would require a recompilation per vendor).
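To make that vendor-agnostic setup concrete, here is a hypothetical sketch of picking up a device through Orochi at runtime. The exact entry points, enums and signatures (`oroInitialize`, `ORO_API_HIP`, `ORO_API_CUDADRIVER`, ...) are assumptions based on Orochi mirroring the HIP/CUDA driver APIs; refer to the Orochi headers for the real API.

```cpp
// Hypothetical sketch only: names and signatures below are assumptions,
// check the Orochi headers for the exact API.
#include <Orochi/Orochi.h>

#include <cstdio>

int main()
{
    // Dynamically load the HIP runtime first (AMD GPU) and fall back to the
    // CUDA driver (NVIDIA GPU): no per-vendor recompilation of the application.
    if (oroInitialize(ORO_API_HIP, 0) != 0 && oroInitialize(ORO_API_CUDADRIVER, 0) != 0)
    {
        std::printf("No HIP or CUDA runtime found on this machine.\n");
        return 1;
    }

    oroInit(0);

    oroDevice device;
    oroDeviceGet(&device, 0);

    oroDeviceProp properties;
    oroGetDeviceProperties(&properties, device);
    std::printf("Path tracing on \"%s\"\n", properties.name);

    return 0;
}
```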
- AMD RDNA1 GPU or newer (RX 5000 or newer) or NVIDIA Maxwell GPU or newer (GTX 700 & GTX 900 Series or newer)
- Visual Studio 2022 on Windows (the only version tested; older versions might work as well)
- CMake
- CUDA for NVIDIA compilation
- Coat Microfacet GGX Layer + Anisotropy, Anisotropy Rotation, Medium Absorption & Thickness
- On-the-fly Monte Carlo integration for energy compensation of interlayer clearcoat multiple scattering
- SGGX Volumetric Sheen Lobe LTC Fit [Zeltner, Burley, Chiang, 2022]
- Specular Microfacet GGX Layer
- Diffuse BRDF lobe with support for:
- Lambertian
- Oren-Nayar
- Metallic Microfacet GGX Layer + Anisotropy & Anisotropy Rotation + Double Roughness [Kulla & Conty, 2017]
- Specular transmission BTDF + Beer-Lambert volumetric absorption [Burley, 2015]
- Diffuse Lambertian BTDF
- Spectral dispersion using Cauchy's equation (see the sketch after this list)
- Multiple-scattering energy compensation for conductors (double metal layer), dielectrics (transmission layer) and glossy-diffuse (specular + diffuse layer) materials [Turquin, 2019]
- Thin-film interference over dielectrics and conductors [Belcour, Barla, 2017]
- Thin-walled model
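For reference, the dispersion feature relies on nothing more than a wavelength-dependent index of refraction. Here is a minimal, illustrative sketch (not the renderer's actual code) of Cauchy's equation, with coefficients roughly matching BK7 glass:

```cpp
// Cauchy's equation: n(lambda) = A + B / lambda^2, with lambda in micrometers.
// A and B below are illustrative values, approximately BK7 glass.
float cauchy_index_of_refraction(float wavelength_micrometers)
{
    const float A = 1.5046f;
    const float B = 0.00420f; // micrometers^2

    return A + B / (wavelength_micrometers * wavelength_micrometers);
}
```

Sampling one wavelength per path and refracting it with its own IOR is what spreads white light into the familiar rainbow through dispersive dielectrics.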
Base light sampling techniques:
- Uniform light sampling for direct lighting estimation + MIS
- Power-proportional light sampling (see the sketch after this list)
- ReGIR [Boksansky et al., 2021] augmented with:
- Representative cell surface-data + integration with NEE++ for resampling according to the product BRDF * L_i * G * V
- Visibility reuse
- Spatial reuse
- Hash grid
- Per-cell RIS integral normalization factor pre-integration for multiple importance sampling support
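As a point of reference for the power-proportional technique above, here is a minimal sketch (not the renderer's actual code) that picks a light with probability proportional to its power using a prefix-sum CDF and a binary search:

```cpp
#include <algorithm>
#include <vector>

// Builds the normalized CDF of the lights' powers.
// Done once, or whenever the lights change.
std::vector<float> build_power_cdf(const std::vector<float>& light_powers)
{
    std::vector<float> cdf(light_powers.size());

    float running_sum = 0.0f;
    for (size_t i = 0; i < light_powers.size(); i++)
    {
        running_sum += light_powers[i];
        cdf[i] = running_sum;
    }

    for (float& value : cdf)
        value /= running_sum;

    return cdf;
}

// Picks a light index with probability proportional to its power.
// 'u' is a uniform random number in [0, 1). 'out_pmf' is the discrete
// probability of having picked that light, needed for the MIS weights.
int sample_light_power_proportional(const std::vector<float>& cdf, float u, float& out_pmf)
{
    int index = (int)(std::lower_bound(cdf.begin(), cdf.end(), u) - cdf.begin());

    out_pmf = cdf[index] - (index > 0 ? cdf[index - 1] : 0.0f);

    return index;
}
```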
Next-event estimation strategies (built on top of the base techniques):
- MIS with BSDF sampling
- Resampled Importance Sampling (RIS) [Talbot et al., 2005] + Weighted Reservoir Sampling (WRS) [M. T. Chao, 1982] for many-light sampling (see the sketch after this list)
- ReSTIR DI
- Next Event Estimation++ [Guo et al., 2020] + Custom envmap support
- HDR Environment map + Multiple Importance Sampling using
- CDF-inversion & binary search
- Alias Table (Vose's O(N) construction [Vose, 1991])
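For reference, the RIS + WRS combination above boils down to a handful of lines. The sketch below is illustrative only (the per-light importance used as the target function is a hypothetical placeholder, e.g. an unshadowed BRDF * Li * G estimate): it streams M uniformly chosen candidates through a single reservoir and returns the selected light along with its unbiased contribution weight.

```cpp
#include <random>
#include <vector>

// Single-sample weighted reservoir [M. T. Chao, 1982].
struct Reservoir
{
    int   selected = -1;          // index of the light kept so far
    float selected_target = 0.0f; // target function value of the kept light
    float weight_sum = 0.0f;      // sum of all resampling weights seen so far

    void update(int candidate, float resampling_weight, float target_value, float u)
    {
        weight_sum += resampling_weight;
        // Keep the new candidate with probability w_i / sum(w)
        if (weight_sum > 0.0f && u < resampling_weight / weight_sum)
        {
            selected = candidate;
            selected_target = target_value;
        }
    }
};

// Resampled Importance Sampling: generate M candidates from an easy source
// distribution (uniform here) and resample one of them proportionally to the
// target function. 'out_ucw' is the unbiased contribution weight
// W = (1 / target(selected)) * (1 / M) * sum(w_i), used in place of 1 / pdf.
int ris_pick_light(const std::vector<float>& light_importance, int M, std::mt19937& rng, float& out_ucw)
{
    std::uniform_real_distribution<float> u01(0.0f, 1.0f);
    std::uniform_int_distribution<int> uniform_light(0, (int)light_importance.size() - 1);

    Reservoir reservoir;
    for (int i = 0; i < M; i++)
    {
        int candidate = uniform_light(rng);
        float source_pdf = 1.0f / (float)light_importance.size(); // uniform candidate generation
        float target = light_importance[candidate];               // target function p_hat
        reservoir.update(candidate, target / source_pdf, target, u01(rng));
    }

    out_ucw = reservoir.selected_target > 0.0f
        ? reservoir.weight_sum / (M * reservoir.selected_target)
        : 0.0f;

    return reservoir.selected;
}
```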
BSDF sampling:
- GGX NDF Sampling:
- Visible Normal Distribution Function (VNDF) [Heitz, 2018]
- Spherical caps VNDF Sampling [Dupuy, Benyoub, 2023] (see the sketch below)
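The spherical caps formulation makes visible normal sampling remarkably compact. Below is an illustrative C++ transcription of the idea from [Dupuy, Benyoub, 2023] (a sketch, not the renderer's actual kernel code); the view direction and the returned microfacet normal are expressed in the local shading frame with Z as the surface normal:

```cpp
#include <cmath>

struct float3 { float x, y, z; };

static float3 normalize(const float3& v)
{
    float length = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / length, v.y / length, v.z / length };
}

// Samples a GGX visible normal using the spherical caps formulation.
// 'view' is the view direction in the local shading frame (Z = surface normal),
// 'alpha_x' / 'alpha_y' are the GGX roughnesses and 'u1' / 'u2' are uniform
// random numbers in [0, 1).
float3 sample_ggx_vndf_spherical_caps(const float3& view, float alpha_x, float alpha_y, float u1, float u2)
{
    // Warp the view direction to the hemisphere (unit roughness) configuration
    float3 view_std = normalize({ view.x * alpha_x, view.y * alpha_y, view.z });

    // Sample a spherical cap in (-view_std.z, 1]
    float phi = 2.0f * 3.14159265358979323846f * u1;
    float z = (1.0f - u2) * (1.0f + view_std.z) - view_std.z;
    float sin_theta = std::sqrt(std::fmax(0.0f, 1.0f - z * z));
    float3 cap = { sin_theta * std::cos(phi), sin_theta * std::sin(phi), z };

    // The halfway vector between the cap sample and the view direction is the
    // microfacet normal in the hemisphere configuration
    float3 half_std = { cap.x + view_std.x, cap.y + view_std.y, cap.z + view_std.z };

    // Warp back to the ellipsoid (actual roughness) configuration
    return normalize({ half_std.x * alpha_x, half_std.y * alpha_y, half_std.z });
}
```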
Path sampling:
- BSDF Sampling:
- One sample MIS for lobe sampling [Hery et al., 2017]
- ReSTIR GI [Ouyang et al., 2021]
- Experimental warp-wide direction reuse for improved indirect rays coherency [Liu et al., 2023]
ReSTIR Samplers:
- ReSTIR DI [Bitterli et al., 2020]
- Supports envmap sampling
- Fused Spatiotemporal Reuse [Wyman, Panteleev, 2021]
- Light Presampling [Wyman, Panteleev, 2021]
- ReSTIR GI [Ouyang et al., 2021]
- Many bias correction weighting schemes:
- 1/M
- 1/Z
- MIS-like
- Generalized balance heuristic
- Pairwise MIS [Bitterli, 2022] & defensive formulation [Lin et al., 2022]
- Pairwise symmetric & asymmetric ratio MIS weights [Pan et al., 2024]
- Adaptive-directional spatial reuse for improved offline rendering efficiency
- Optimal visibility sampling [Pan et al., 2024]
- Microfacet Model Regularization for Robust Light Transport [Jendersie et al., 2019]
- G-MoN - Adaptive median of means for unbiased firefly removal [Buisine et al., 2021]
- Texture support for all the parameters of the BSDF
- Texture alpha transparency support
- Stochastic material opacity support
- Normal mapping
- Nested dielectrics support
- Handling with priorities as proposed in [Simple Nested Dielectrics in Ray Traced Images, Schmidt, 2002]
- A Low-Distortion Map Between Triangle and Square [Heitz, 2019] (see the sketch at the end of this feature list)
- Per-pixel variance based adaptive sampling
- Intel Open Image Denoise + Normals & Albedo AOV support
- Interactive ImGui interface
- Asynchronous interface to guarantee smooth UI interactions even with heavy path tracing kernels
- Interactive first-person camera
- Different frame-buffer visualizations (adaptive sampling heatmap, converged pixels, denoiser normals / albedo, ...)
- Use of the [ASSIMP] library to support many scene file formats.
- Multithreaded scene parsing/texture loading/shader compiling/BVH building/envmap processing/... for faster application startup times
- Background-asynchronous path tracing kernels pre-compilation
- Shader cache to avoid recompiling kernels unnecessarily
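As referenced in the triangle sampling bullet above, the low-distortion square-to-triangle map of [Heitz, 2019] fits in a couple of lines. An illustrative sketch (not the renderer's actual code) mapping a 2D random point in the unit square to the first two barycentric coordinates of a triangle:

```cpp
struct float2 { float x, y; };

// Low-distortion, area-preserving map from the unit square to a triangle
// [Heitz, 2019]. Returns the first two barycentric coordinates (b0, b1);
// the third one is 1 - b0 - b1.
float2 square_to_triangle(float2 u)
{
    if (u.y > u.x)
    {
        u.x *= 0.5f;
        u.y -= u.x;
    }
    else
    {
        u.y *= 0.5f;
        u.x -= u.y;
    }

    return u;
}
```

Unlike the classic square-root parameterization, this mapping keeps stratified and low-discrepancy point sets well distributed over the triangle, which matters when sampling points on emissive triangles.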
Nothing to do, go to the "Compiling" step.
To build the project on NVIDIA hardware, you will need to install the NVIDIA CUDA SDK v12.2. It can be downloaded and installed from here.
The CMake build then expects the `CUDA_PATH` environment variable to be defined. This should automatically be the case after installing the CUDA Toolkit but, just in case, you can define it yourself such that `CUDA_PATH/include/cuda.h` is a valid file path.
- Install OpenGL, GLFW and glew dependencies:
```bash
sudo apt install freeglut3-dev
sudo apt install libglfw3-dev
sudo apt install libglew-dev
```
- Install AMD HIP (if you already have ROCm installed, you should have a `/opt/rocm` folder on your system and you can skip this step):
  - Download the `amdgpu-install` package: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/amdgpu-install.html
  - Install the package: `sudo apt install ./amdgpu-install_xxxx.deb`
  - Install HIP: `sudo amdgpu-install --usecase=hip`
- Normally, you would have to run the path tracer as `sudo` to be able to access GPGPU compute capabilities. However, you can save yourself the trouble by adding your user to the `render` group and rebooting your system: `sudo usermod -a -G render $LOGNAME`
- Install OpenGL, GLFW and glew dependencies:
```bash
sudo apt install freeglut3-dev
sudo apt install libglfw3-dev
sudo apt install libglew-dev
sudo apt install libomp-dev
```
- Install the NVIDIA CUDA SDK (called "CUDA Toolkit"). It can be downloaded and installed from here.
With the prerequisites fulfilled, you now just have to run CMake:
```bash
git clone https://github.com/TomClabault/HIPRT-Path-Tracer.git --recursive
cd HIPRT-Path-Tracer
mkdir build
cd build
cmake -DCMAKE_BUILD_TYPE=Debug ..
```
On Windows, a Visual Studio solution will be generated in the `build` folder; open it and compile the project from there (select `HIPRTPathTracer` as the startup project).
On Linux, the `HIPRTPathTracer` executable will be generated in the `build` folder.
```bash
./HIPRT-Path-Tracer
```
The following arguments are available:
- `<scene file path>`: an argument of the command line given without any prefix is treated as the scene file (see the supported file formats).
- `--sky=<path>`: equirectangular skysphere used during rendering (HDR or not)
- `--samples=N`: number of samples to trace*
- `--bounces=N`: maximum number of bounces in the scene*
- `--w=N` / `--width=N`: width of the rendering*
- `--h=N` / `--height=N`: height of the rendering*
* CPU-only command line arguments. These parameters are controlled through the UI when running on the GPU.
Sources of the scenes can be found here.
GNU General Public License v3.0 or later
See COPYING for the full text.