We address practicability issues of MCMC rendering, opening up path space MCMC as a more general adaptive framework. To counter non-uniform image quality, we derive an analytic target function for image-space sample stratification. It is based on a novel connection between variance and path differentials, which allows analytic variance estimates for MC samples (with potential uses in adaptive algorithms outside MCMC). We also apply our theoretical framework to optimize an adaptive MCMC algorithm that uses only forward path construction, in contrast to many previous MCMC techniques that rely on bi-directional path tracing. Notably, we adapt a full-featured path tracer (with minimal changes) into a single-path state space Markov chain, bridging another gap between MCMC and MC.
ACM Transactions on Graphics, 39, 6, Article 246 (to appear)
Bright samples with low sampling probability, often called fireflies, occur in all practical Monte Carlo renderers and are an inherent part of unbiased estimation. For finite-sample estimates, however, they can lead to excessive variance. Rejecting all such samples as outliers, as suggested in previous work, leads to overly biased estimates and can cause undesirable artifacts. In this paper, we show how samples can be reweighted depending on their contribution and sampling frequency, such that the finite-sample estimate in fact gets closer to the correct expected value and the overall image noise (variance) can be controlled.
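The core idea of reweighting rather than rejecting can be illustrated with a minimal sketch. This hypothetical helper is not the paper's actual estimator (which adapts the weight to each sample's contribution and sampling frequency); it only shows how scaling outliers down, instead of discarding them, trades a small controlled bias for a large variance reduction:

```python
import numpy as np

def reweight_samples(contributions, threshold):
    """Soft outlier handling: samples above `threshold` are scaled down
    instead of being rejected outright, so part of their contribution
    is retained and the bias stays bounded."""
    c = np.asarray(contributions, dtype=float)
    # Weight in (0, 1]: 1 for inliers, threshold/c for outliers.
    w = np.where(c > threshold, threshold / c, 1.0)
    return c * w

# Rejection would map the firefly 100.0 to 0; reweighting keeps it at 10.0.
print(reweight_samples([1.0, 2.0, 100.0], threshold=10.0))
```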
Without special handling, rendering emissive media is challenging: in thin regions where only a few scattering events occur, emission is poorly sampled. Importance sampling by emission can also be disadvantageous, as it neglects absorption in dense regions.
To use all emission events encountered along line segments inside volumes, we extend the standard path space measurement contribution so that it collects all emission along randomly sampled path segments, rather than just at path vertices, while retaining unbiasedness.
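For intuition, the quantity collected along a segment is the standard emission-absorption line integral, which for a homogeneous medium has a closed form. This is a minimal sketch of that textbook integral, not the paper's extended measurement contribution; `eps` (emission) and `sigma_t` (extinction) are assumed constant along the segment:

```python
import math

def segment_emission(eps, sigma_t, length):
    """Emission accumulated toward the segment origin over [0, length]:
    integral of eps * exp(-sigma_t * t) dt
    = eps * (1 - exp(-sigma_t * length)) / sigma_t."""
    if sigma_t == 0.0:
        # No attenuation: emission accumulates linearly with distance.
        return eps * length
    return eps * (1.0 - math.exp(-sigma_t * length)) / sigma_t
```

For an optically thick segment (large `sigma_t * length`) the result saturates at `eps / sigma_t`, which is why importance sampling by emission alone can over-sample dense regions whose emission is mostly absorbed.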
Joint work with Florian Simon, Johannes Hanika, and Carsten Dachsbacher Computer Graphics Forum (Proc. of EGSR 2017)
Inspired by vector field topology, an established tool for the extraction and identification of important features of flows and vector fields, we develop means for the analysis of the structure of light transport. We derive an analogy to vector field topology that defines coherent structures in light transport. We introduce Finite-Time Path Deflection (FTPD), a scalar quantity that represents the deflection characteristic of all light transport paths passing through a given point in space. For virtual scenes, the FTPD can be computed directly using path-space Monte Carlo integration. We show that the coherent regions visualized by the FTPD are closely related to the coherent regions in our new topologically-motivated analysis of light transport. FTPD visualizations are thus also visualizations of the structure of light transport.
Joint work with Carsten Dachsbacher Computer Graphics Forum (Proc. of EuroVis 2015)
Visualization can benefit from advanced rendering techniques also used for photo-realistic image synthesis. However, high contrast, shadows, and occlusion can limit the usefulness of such visualizations. We present a method to optimize the attenuation of light for visualization purposes.
Guided by an importance function, more light is transmitted to and from the features of interest, improving their visibility, while contextual structures still cast shadows that provide depth cues.
Our work is inspired by previous visibility optimization work for surfaces, but significantly improves the efficiency of the optimization, which is crucial for scaling to volumetric data sets: by converting the smoothing terms used in previous work into a separable pre-filtering step on the input data, we obtain a closed-form solution for the optimal extinction terms along a view or shadow ray, achieving interactive performance.
We present a GPU-friendly real-time voxelization technique for rendering homogeneous media that are defined by particles, e.g., fluids obtained from particle-based simulations such as Smoothed Particle Hydrodynamics (SPH). Our method computes view-adaptive binary voxelizations with on-the-fly compression of a tiled perspective voxel grid, achieving higher resolutions than previous approaches. It allows interactive rendering with complex effects such as ray casting-based refraction and reflection, light scattering and absorption, and ambient occlusion. In contrast to previous methods, it does not rely on expensive preprocessing.
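The notion of a binary voxelization along the view direction can be sketched as follows. This hypothetical helper simply packs one depth column of binary voxels into the bits of an integer mask; it does not reproduce the paper's tiled perspective grid or its on-the-fly compression:

```python
def voxelize_column(depths, near, far, slices=64):
    """Binary voxelization of one view-ray column: for each particle
    sample depth, set the bit of the depth slice it falls into.
    Samples outside [near, far) are ignored."""
    mask = 0
    for d in depths:
        if near <= d < far:
            slice_idx = int((d - near) / (far - near) * slices)
            mask |= 1 << min(slice_idx, slices - 1)
    return mask
```

Storing occupancy as one bit per slice is what makes high resolutions feasible: a 64-slice column costs a single 64-bit word, and ray marching through the column reduces to bit tests.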
We present a stable shading method and a procedural shading model that enable real-time rendering of sub-pixel glints and anisotropic microdetails resulting from irregular microscopic surface structure, in order to simulate a rich spectrum of appearances ranging from sparkling to brushed materials. We introduce a biscale Normal Distribution Function (NDF) for microdetails that provides convenient artistic control over both the global appearance and the appearance of individual microdetail shapes, while efficiently generating procedural details.
Displacement mapping a textured surface introduces distortions of the displaced surface's texture. Our approach corrects this by counter-distorting the other texture maps according to the displacement map, using a fast, simple, fully GPU-based two-step procedure. First, a correction deformation is computed from the displacement map. Second, we apply the correction deformation to the texture coordinates used for surface texture lookups, counteracting the uneven distortion due to displacement mapping.
Joint work with Tobias Ritschel Computer Graphics Forum (Proc. of HPG 2019)
Ray tracing poses specific challenges for parallel architectures such as GPUs: each ray may hit different objects with differing materials and textures, requiring potentially divergent shading calculations and random access to large parts of the scene description. In contrast, traditional GPU rasterization pipelines require only linear scene access and trivially support fully dynamic scene geometry. In this work, we explore an object-order method for tracing incoherent secondary rays that has similar properties and flexibility, implemented in the standard graphics pipeline. Thus, the ability to generate, transform, and animate geometry via shaders is fully retained, and our method does not distinguish between static and dynamic geometry.