CSharpGL(54)用基於圖像的光照(IBL)來計算PBR的Specular部分
接下來本系列將通過翻譯(https://learnopengl.com)這個網站上關於PBR的內容來學習PBR(Physically Based Rendering)。
本文對應(https://learnopengl.com/PBR/IBL/Specular-IBL)。
原文雖然寫得挺好,但是仍舊不夠人性化。過一陣我自己總結總結PBR,寫一篇更容易理解的。
正文
In the previous tutorial we've set up PBR in combination with image based lighting by pre-computing an irradiance map as the lighting's indirect diffuse portion. In this tutorial we'll focus on the specular part of the reflectance equation:
在上一篇教程中我們已經用IBL(基於圖像的光照)解決了PBR的一部分:用預計算的輻照度貼圖作為非直接光照的diffuse部分。在本教程中我們將關注反射率方程中的specular部分:
You'll notice that the Cook-Torrance specular portion (multiplied by kS) isn't constant over the integral and is dependent on the incoming light direction, but also the incoming view direction. Trying to solve the integral for all incoming light directions including all possible view directions is a combinatorial overload and way too expensive to calculate on a real-time basis. Epic Games proposed a solution where they were able to pre-convolute the specular part for real time purposes, given a few compromises, known as the split sum approximation.
你會注意到Cook-Torrance的specular部分(乘以kS的那部分)在積分域上不是常量,它既依賴入射光方向,又依賴觀察者方向。要對所有入射光方向和所有可能的觀察者方向的組合求解積分,組合數量會爆炸,對實時渲染來說計算代價太高了。Epic游戲公司提出了一個解決方案(被稱為拆分求和近似,split sum approximation):做出一些妥協之后,就能夠為實時渲染預先對specular部分進行卷積。
The split sum approximation splits the specular part of the reflectance equation into two separate parts that we can individually convolute and later combine in the PBR shader for specular indirect image based lighting. Similar to how we pre-convoluted the irradiance map, the split sum approximation requires an HDR environment map as its convolution input. To understand the split sum approximation we'll again look at the reflectance equation, but this time only focus on the specular part (we've extracted the diffuse part in the previous tutorial):
拆分求和近似方案將反射率方程的specular部分拆分為兩個單獨的部分,我們可以分別對其進行卷積,之后在PBR shader中再結合起來,用於specular的非直接IBL光照。與預計算輻照度貼圖時類似,拆分求和近似方案需要一個HDR環境貼圖作為卷積的輸入。為了理解這個方案,我們再看一下反射率方程,但這次只關注specular部分(diffuse部分已經在上一篇教程中分離出去了):
For the same (performance) reasons as the irradiance convolution, we can't solve the specular part of the integral in real time and expect a reasonable performance. So preferably we'd pre-compute this integral to get something like a specular IBL map, sample this map with the fragment's normal and be done with it. However, this is where it gets a bit tricky. We were able to pre-compute the irradiance map as the integral only depended on ωi and we could move the constant diffuse albedo terms out of the integral. This time, the integral depends on more than just ωi as evident from the BRDF:
由於和輻照度卷積相同的原因(性能),我們無法實時求解specular部分的積分並期待一個可接受的性能。所以我們傾向於預計算這個積分,得到某種specular的IBL貼圖,用片段的法線在貼圖上采樣,得到所需結果。但是,這里比較困難。我們能預計算輻照度貼圖,是因為它的積分只依賴ωi,而且我們還能把常量的diffuse反照率(albedo)項移到積分外面。這次,從BRDF公式上可見,積分不止依賴一個ωi:
This time the integral also depends on wo and we can't really sample a pre-computed cubemap with two direction vectors. The position p is irrelevant here as described in the previous tutorial. Pre-computing this integral for every possible combination of ωi and ωo isn't practical in a real-time setting.
這次,積分還依賴於ωo,而我們無法用2個方向向量對一個預計算的cubemap進行采樣。如上一篇教程所述,這里的位置p是無關的。在實時系統中為ωi和ωo的每種可能組合預計算這個積分,是不現實的。
Epic Games' split sum approximation solves the issue by splitting the pre-computation into 2 individual parts that we can later combine to get the resulting pre-computed result we're after. The split sum approximation splits the specular integral into two separate integrals:
Epic游戲公司的拆分求和近似方案解決了這個問題:把預計算拆分為2個互相獨立的部分,且之后可以聯合起來得到我們需要的預計算結果。拆分求和近似方案將specular積分拆分為2個獨立的積分:
The first part (when convoluted) is known as the pre-filtered environment map which is (similar to the irradiance map) a pre-computed environment convolution map, but this time taking roughness into account. For increasing roughness levels, the environment map is convoluted with more scattered sample vectors, creating more blurry reflections. For each roughness level we convolute, we store the sequentially blurrier results in the pre-filtered map's mipmap levels. For instance, a pre-filtered environment map storing the pre-convoluted result of 5 different roughness values in its 5 mipmap levels looks as follows:
第一部分(卷積后)被稱為pre-filter環境貼圖,它(與輻照度貼圖類似)是個預計算的環境卷積貼圖,但這次考慮了粗糙度。粗糙度level越高,卷積環境貼圖時使用的采樣向量就越分散,反射也就越模糊。我們對每個粗糙度level進行卷積,並把逐漸變模糊的結果依次保存到pre-filter貼圖的各個mipmap層上。例如,一個在5個mipmap層中保存了5種不同粗糙度的預卷積結果的pre-filter環境貼圖,如下圖所示:
We generate the sample vectors and their scattering strength using the normal distribution function (NDF) of the Cook-Torrance BRDF that takes as input both a normal and view direction. As we don't know beforehand the view direction when convoluting the environment map, Epic Games makes a further approximation by assuming the view direction (and thus the specular reflection direction) is always equal to the output sample direction ωo. This translates itself to the following code:
Cook-Torrance BRDF的法線分布函數(NDF)以法線和觀察者方向為輸入,我們用它來生成采樣向量及其散射強度。由於卷積環境貼圖時我們無法提前預知觀察者方向,Epic游戲公司又做了一個近似:假設觀察者方向(即specular反射方向)總是等於輸出采樣方向ωo。相應的代碼如下:
vec3 N = normalize(w_o);
vec3 R = N;
vec3 V = R;
This way the pre-filtered environment convolution doesn't need to be aware of the view direction. This does mean we don't get nice grazing specular reflections when looking at specular surface reflections from an angle as seen in the image below (courtesy of the Moving Frostbite to PBR article); this is however generally considered a decent compromise:
這樣,pre-filter環境卷積就不需要知道觀察者方向了。代價是,像下圖(圖片來自Moving Frostbite to PBR一文)那樣從一定角度觀察specular表面反射時,我們得不到很好的掠射角specular反射效果。不過這通常被認為是可以接受的折衷。
The second part of the equation equals the BRDF part of the specular integral. If we pretend the incoming radiance is completely white for every direction (thus L(p,x)=1.0) we can pre-calculate the BRDF's response given an input roughness and an input angle between the normal n and light direction ωi, or n⋅ωi. Epic Games stores the pre-computed BRDF's response to each normal and light direction combination on varying roughness values in a 2D lookup texture (LUT) known as the BRDF integration map. The 2D lookup texture outputs a scale (red) and a bias value (green) to the surface's Fresnel response giving us the second part of the split specular integral:
方程的第二部分就是specular積分的BRDF部分。如果我們假設所有方向上的入射輻射率都是純白的(即L(p,x)=1.0),我們就能以粗糙度和入射角(法線n與入射方向ωi的夾角,即n⋅ωi)為輸入,預計算BRDF的響應值。Epic游戲公司把BRDF對每種法線與入射方向組合在不同粗糙度下的預計算響應值,保存到一張二維查詢紋理(LUT)中,即BRDF積分貼圖。這張二維查詢紋理輸出一個縮放值(紅色通道)和一個偏移值(綠色通道),作用於表面的菲涅耳響應,這就是拆分后的specular積分的第二部分:
(換句話說,LUT的紅色通道是乘到菲涅耳項上的縮放係數,綠色通道是加上去的偏移量,對應后文代碼中的 F * envBRDF.x + envBRDF.y。)
We generate the lookup texture by treating the horizontal texture coordinate (ranged between 0.0 and 1.0) of a plane as the BRDF's input n⋅ωi and its vertical texture coordinate as the input roughness value. With this BRDF integration map and the pre-filtered environment map we can combine both to get the result of the specular integral:
我們把一個平面的水平紋理坐標(範圍從0.0到1.0)作為BRDF的輸入參數n⋅ωi,把垂直紋理坐標作為輸入的粗糙度參數,以此生成查詢紋理。有了這張BRDF積分貼圖和pre-filter環境貼圖,我們就可以把兩者結合起來,得到specular積分的結果:
float lod = getMipLevelFromRoughness(roughness);
vec3 prefilteredColor = textureCubeLod(PrefilteredEnvMap, refVec, lod);
vec2 envBRDF = texture2D(BRDFIntegrationMap, vec2(NdotV, roughness)).xy;
vec3 indirectSpecular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
This should give you a bit of an overview on how Epic Games' split sum approximation roughly approaches the indirect specular part of the reflectance equation. Let's now try and build the pre-convoluted parts ourselves.
這些應該讓你大體上理解,Epic游戲公司的拆分求和近似方案是如何計算反射率方程的非直接specular部分的。現在我們試試自己構建預卷積部分。
預卷積HDR環境貼圖
Pre-filtering an environment map is quite similar to how we convoluted an irradiance map. The difference being that we now account for roughness and store sequentially rougher reflections in the pre-filtered map's mip levels.
預卷積一個環境貼圖,與我們卷積輻照度貼圖類似。不同點在於,我們現在要考慮粗糙度,並且將越來越粗糙的反射情況依次保存到貼圖的各個mipmap層里。
First, we need to generate a new cubemap to hold the pre-filtered environment map data. To make sure we allocate enough memory for its mip levels we call glGenerateMipmap as an easy way to allocate the required amount of memory.
首先,我們需要生成一個cubemap對象,用於保存pre-filter的環境貼圖數據。為保證我們為它的mipmap層分配了足夠的內存,我們使用glGenerateMipmap這一簡便方式來分配需要的內存。
unsigned int prefilterMap;
glGenTextures(1, &prefilterMap);
glBindTexture(GL_TEXTURE_CUBE_MAP, prefilterMap);
for (unsigned int i = 0; i < 6; ++i)
{
    glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGB16F, 128, 128, 0, GL_RGB, GL_FLOAT, nullptr);
}
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
Note that because we plan to sample the prefilterMap its mipmaps you'll need to make sure its minification filter is set to GL_LINEAR_MIPMAP_LINEAR to enable trilinear filtering. We store the pre-filtered specular reflections in a per-face resolution of 128 by 128 at its base mip level. This is likely to be enough for most reflections, but if you have a large number of smooth materials (think of car reflections) you may want to increase the resolution.
注意,因為我們計划對prefilterMap的各個mipmap層采樣,你需要確保它的縮小過濾參數設置為GL_LINEAR_MIPMAP_LINEAR,以啟用三線性過濾。我們以每個面128x128像素為基層(第一層)mip的分辨率,保存pre-filter的specular反射。這對大多數反射來說已經足夠了,但如果場景里有大量光滑材質(比如汽車上的反射),你可能需要提高分辨率。
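glGenerateMipmap會為基層之下的完整mip鏈分配內存。下面用一小段Python草圖示意(僅輔助理解,函數名mip_chain是本文虛構的):對128x128的基層,完整mip鏈共有8層,而后文的pre-filter過程只會用到其中前5層(128到8)。

```python
import math

def mip_chain(base_size):
    """列出一個完整mip鏈中每層(單個面)的分辨率,直到1x1為止。"""
    levels = int(math.log2(base_size)) + 1
    return [base_size >> i for i in range(levels)]

print(mip_chain(128))  # [128, 64, 32, 16, 8, 4, 2, 1],共8層
```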
In the previous tutorial we convoluted the environment map by generating sample vectors uniformly spread over the hemisphere Ω using spherical coordinates. While this works just fine for irradiance, for specular reflections it's less efficient. When it comes to specular reflections, based on the roughness of a surface, the light reflects closely or roughly around a reflection vector r over a normal n, but (unless the surface is extremely rough) around the reflection vector nonetheless:
在之前的教程中,我們用球坐標在半球Ω上均勻地生成采樣向量,實現了對環境貼圖的卷積。雖然這對輻照度很好用,但對specular反射就不夠高效。對於specular反射,根據表面粗糙度的不同,反射光會緊密或鬆散地聚集在法線n對應的反射向量r周圍,但(除非表面極度粗糙)總歸是圍繞着反射向量的:
The general shape of possible outgoing light reflections is known as the specular lobe. As roughness increases, the specular lobe's size increases; and the shape of the specular lobe changes on varying incoming light directions. The shape of the specular lobe is thus highly dependent on the material.
可能的出射光反射的大致形狀被稱為specular葉(specular lobe)。隨着粗糙度增加,specular葉的尺寸也增大;並且它的形狀會隨入射光方向而變化。因此specular葉的形狀高度依賴於材質。
When it comes to the microsurface model, we can imagine the specular lobe as the reflection orientation about the microfacet halfway vectors given some incoming light direction. Seeing as most light rays end up in a specular lobe reflected around the microfacet halfway vectors it makes sense to generate the sample vectors in a similar fashion as most would otherwise be wasted. This process is known as importance sampling.
對於微平面模型,我們可以把specular葉想象為:在給定入射光方向下,圍繞微平面半角向量的反射朝向。鑒於大多數出射光線最終都落在圍繞微平面半角向量反射的specular葉內,以類似的方式生成采樣向量是合理的,否則大多數采樣都會被浪費掉。這個過程被稱為重要性采樣(importance sampling)。
蒙特卡羅積分和重要性采樣
To fully get a grasp of importance sampling it's relevant we first delve into the mathematical construct known as Monte Carlo integration. Monte Carlo integration revolves mostly around a combination of statistics and probability theory. Monte Carlo helps us in discretely solving the problem of figuring out some statistic or value of a population without having to take all of the population into consideration.
為真正理解重要性采樣,我們要先研究一個名為"蒙特卡羅積分"的數學概念。蒙特卡羅積分主要圍繞統計學和概率論,它幫助我們用離散的方式求解總體的某個統計量或數值,而不需要把全部總體都考慮進來。
For instance, let's say you want to count the average height of all citizens of a country. To get your result, you could measure every citizen and average their height which will give you the exact answer you're looking for. However, since most countries have a considerable population this isn't a realistic approach: it would take too much effort and time.
例如,假設你想統計一個國家所有國民的平均身高。你可以測量每個人並求平均,從而得到准確答案。但是,由於大多數國家人口眾多,這顯然不現實:太費時間和精力了。
A different approach is to pick a much smaller completely random (unbiased) subset of this population, measure their height and average the result. This population could be as small as a 100 people. While not as accurate as the exact answer, you'll get an answer that is relatively close to the ground truth. This is known as the law of large numbers. The idea is that if you measure a smaller set of size N of truly random samples from the total population, the result will be relatively close to the true answer and gets closer as the number of samples N increases.
另一個辦法是,從總體中挑選一個小得多的完全隨機(無偏)子集,測量他們的身高,求平均值。這個子集可能小到只有100人。雖然不如准確答案精確,但你會得到一個相對接近真實值的答案。這就是"大數定律"。其思想是:如果你從總體中測量一個規模為N的真正隨機的樣本集,結果會相對接近真實答案,並且隨着樣本數N的增加而更接近。
Monte Carlo integration builds on this law of large numbers and takes the same approach in solving an integral. Rather than solving an integral for all possible (theoretically infinite) sample values x, simply generate N sample values randomly picked from the total population and average. As N increases we're guaranteed to get a result closer to the exact answer of the integral:
蒙特卡羅積分建立在這個"大數定律"之上,用同樣的方法求解積分。與其為所有可能的(理論上無限多的)采樣值x求解積分,不如簡單地從總體中隨機生成N個采樣值,然后求平均。隨着N增加,我們保證得到越來越接近積分准確值的結果:
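上面的思想可以用一小段Python草圖演示(示例積分和函數名是本文虛構的,僅作演示):在[0, π]上均勻采樣估算∫sin(x)dx,准確答案是2。乘以(b−a)相當於把每個樣本除以均勻pdf = 1/(b−a)。

```python
import math
import random

def monte_carlo(f, a, b, n, seed=0):
    """用均勻采樣的蒙特卡羅方法估算f在[a, b]上的積分。
    乘以(b - a)相當於把每個樣本除以均勻pdf = 1/(b - a)。"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        total += f(rng.uniform(a, b))
    return (b - a) * total / n

print(monte_carlo(math.sin, 0.0, math.pi, 100_000))  # 約等於2.0
```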
To solve the integral, we take N random samples over the population a to b, add them together and divide by the total number of samples to average them. The pdf stands for the probability density function that tells us the probability a specific sample occurs over the total sample set. For instance, the pdf of the height of a population would look a bit like this:
為求解此積分,我們在總體a到b之間采集N個隨機樣本,加起來再除以樣本總數求平均。pdf代表概率密度函數,它告訴我們某個特定樣本在整個樣本集中出現的概率。例如,人口身高的pdf大致如下圖所示:
From this graph we can see that if we take any random sample of the population, there is a higher chance of picking a sample of someone of height 1.70, compared to the lower probability of the sample being of height 1.50.
從圖中可以看到,如果我們從總體中隨機采集樣本,采到身高1.70的人的概率,要比采到身高1.50的人的概率更高。
When it comes to Monte Carlo integration, some samples might have a higher probability of being generated than others. This is why for any general Monte Carlo estimation we divide or multiply the sampled value by the sample probability according to a pdf. So far, in each of our cases of estimating an integral, the samples we've generated were uniform, having the exact same chance of being generated. Our estimations so far were unbiased, meaning that given an ever-increasing amount of samples we will eventually converge to the exact solution of the integral.
對於蒙特卡羅積分,有些樣本的生成概率可能比其他樣本更高。這就是為什么,對於一般的蒙特卡羅估算,我們要根據pdf用樣本概率去除或乘采樣值。目前為止,在每種估算積分的情形中,我們生成的采樣都是均勻的,每個樣本的生成概率完全相同。我們目前的估算是無偏的,也就是說,隨着采樣量不斷增加,我們最終會收斂到積分的准確解。
However, some Monte Carlo estimators are biased, meaning that the generated samples aren't completely random, but focused towards a specific value or direction. These biased Monte Carlo estimators have a faster rate of convergence meaning they can converge to the exact solution at a much faster rate, but due to their biased nature it's likely they won't ever converge to the exact solution. This is generally an acceptable tradeoff, especially in computer graphics, as the exact solution isn't too important as long as the results are visually acceptable. As we'll soon see with importance sampling (which uses a biased estimator) the generated samples are biased towards specific directions in which case we account for this by multiplying or dividing each sample by its corresponding pdf.
但是,有些蒙特卡羅估算器是有偏的,也就是說,生成的采樣值不是完全隨機的,而是偏向某個特定的值或方向。這些有偏的蒙特卡羅估算器收斂速度更快,能以快得多的速度逼近准確解,但由於其有偏的本性,它們很可能永遠不會收斂到准確解。這通常是可接受的折衷,特別是在計算機圖形學領域:只要結果在視覺上可接受,准確解並不那么重要。我們馬上會在重要性采樣(它使用一個有偏的估算器)中看到,生成的采樣值偏向特定的方向,這時我們通過把每個采樣值乘以或除以其對應的pdf來修正。
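下面的Python草圖(積分例子是本文虛構的)演示"除以pdf"的作用:仍然估算∫[0,π] sin(x)dx = 2,但這次按pdf p(x) = sin(x)/2進行重要性采樣(逆CDF為x = acos(1−2u))。由於pdf與被積函數形狀完全一致,每個樣本的貢獻sin(x)/p(x)都恰好等於2,估算的方差降為零。

```python
import math
import random

def importance_sample_integral(n, seed=0):
    """按pdf p(x) = sin(x)/2 采樣來估算∫[0,π] sin(x) dx。
    每個樣本的貢獻都要除以它自身的pdf,以修正采樣的偏向。"""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()                # u ∈ [0, 1)
        x = math.acos(1.0 - 2.0 * u)    # 逆CDF采樣:CDF(x) = (1 - cos x)/2
        pdf = math.sin(x) / 2.0
        total += math.sin(x) / pdf
    return total / n

print(importance_sample_integral(1_000))  # ≈ 2.0(此pdf下方差為零)
```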
Monte Carlo integration is quite prevalent in computer graphics as it's a fairly intuitive way to approximate continuous integrals in a discrete and efficient fashion: take any area/volume to sample over (like the hemisphere Ω), generate N amount of random samples within the area/volume and sum and weigh every sample contribution to the final result.
蒙特卡羅積分在計算機圖形學中相當普遍,因為它是用離散且高效的方式近似連續積分的一種相當直觀的方法:取任意要采樣的面積/體積(例如半球Ω),在其中生成N個隨機采樣值,再加權求和每個采樣值對最終結果的貢獻。
Monte Carlo integration is an extensive mathematical topic and I won't delve much further into the specifics, but we'll mention that there are also multiple ways of generating the random samples. By default, each sample is completely (pseudo)random as we're used to, but by utilizing certain properties of semi-random sequences we can generate sample vectors that are still random, but have interesting properties. For instance, we can do Monte Carlo integration on something called low-discrepancy sequences which still generate random samples, but each sample is more evenly distributed:
蒙特卡羅積分是個廣闊的數學話題,我不會再深入探討其細節。但要提一下,生成隨機采樣值的方式有很多種。默認情況下,每個采樣值都是我們熟悉的完全(偽)隨機數,但利用半隨機序列的某些性質,我們可以生成仍然隨機、卻具有有趣性質的采樣向量。例如,我們可以對所謂的"低差異序列"進行蒙特卡羅積分,它生成的仍是隨機采樣值,但每個采樣值分布得更均勻:
When using a low-discrepancy sequence for generating the Monte Carlo sample vectors, the process is known as Quasi-Monte Carlo integration. Quasi-Monte Carlo methods have a faster rate of convergence which makes them interesting for performance heavy applications.
當使用低差異序列生成蒙特卡羅采樣向量時,這個過程就被稱為“准蒙特卡羅積分”。它的收斂速度更快,所以對性能要求嚴格的應用程序很有吸引力。
Given our newly obtained knowledge of Monte Carlo and Quasi-Monte Carlo integration, there is an interesting property we can use for an even faster rate of convergence known as importance sampling. We've mentioned it before in this tutorial, but when it comes to specular reflections of light, the reflected light vectors are constrained in a specular lobe with its size determined by the roughness of the surface. Seeing as any (quasi-)randomly generated sample outside the specular lobe isn't relevant to the specular integral it makes sense to focus the sample generation to within the specular lobe, at the cost of making the Monte Carlo estimator biased.
基於蒙特卡羅和准蒙特卡羅積分的知識,我們可以用一個有趣的性質——重要性采樣來得到更快速的收斂。本教程之前提過它,但是對於光的specular反射,反射光向量被束縛在特定的specular葉內,其尺寸由表面的粗糙度決定。鑒於任何(准)隨機生成的specular葉外部的采樣值都對specular積分沒有影響,那么就有理由僅關注specular葉內部的采樣值生成。其代價是,蒙特卡羅估算有偏差。
This is in essence what importance sampling is about: generate sample vectors in some region constrained by the roughness oriented around the microfacet's halfway vector. By combining Quasi-Monte Carlo sampling with a low-discrepancy sequence and biasing the sample vectors using importance sampling we get a high rate of convergence. Because we reach the solution at a faster rate, we'll need less samples to reach an approximation that is sufficient enough. Because of this, the combination even allows graphics applications to solve the specular integral in real-time, albeit it still significantly slower than pre-computing the results.
重要性采樣的本質是:圍繞微平面的半角向量,在受粗糙度約束的區域內生成采樣向量。通過聯合使用低差異序列的准蒙特卡羅采樣和使用重要性采樣的有偏差的采樣向量,我們得到了很快的收斂速度。因為我們接近答案的速度更快,所以我們需要的采樣量更少。因此,這一聯合方案甚至允許圖形應用程序實時計算specular積分,雖然它還是比預計算要慢得多。
低差異序列
In this tutorial we'll pre-compute the specular portion of the indirect reflectance equation using importance sampling given a random low-discrepancy sequence based on the Quasi-Monte Carlo method. The sequence we'll be using is known as the Hammersley Sequence as carefully described by Holger Dammertz. The Hammersley sequence is based on the Van Der Corpus sequence which mirrors a decimal binary representation around its decimal point.
在本教程中,我們將預計算非直接反射公式中的specular部分,方式是用基於准蒙特卡羅方法的隨機低差異序列進行重要性采樣。我們要用的序列被稱為Hammersley序列,它由Holger Dammertz給出了詳盡的描述。Hammersley序列基於Van Der Corpus序列,后者把一個十進制小數的二進制表示以小數點為中心做鏡像翻轉。
Given some neat bit tricks we can quite efficiently generate the Van Der Corpus sequence in a shader program which we'll use to get a Hammersley sequence sample i over N total samples:
利用一些漂亮的位運算技巧,我們可以在shader程序里高效地生成Van Der Corpus序列,並用它得到總采樣數為N時的第i個Hammersley序列采樣:
float RadicalInverse_VdC(uint bits)
{
    bits = (bits << 16u) | (bits >> 16u);
    bits = ((bits & 0x55555555u) << 1u) | ((bits & 0xAAAAAAAAu) >> 1u);
    bits = ((bits & 0x33333333u) << 2u) | ((bits & 0xCCCCCCCCu) >> 2u);
    bits = ((bits & 0x0F0F0F0Fu) << 4u) | ((bits & 0xF0F0F0F0u) >> 4u);
    bits = ((bits & 0x00FF00FFu) << 8u) | ((bits & 0xFF00FF00u) >> 8u);
    return float(bits) * 2.3283064365386963e-10; // / 0x100000000
}
// ----------------------------------------------------------------------------
vec2 Hammersley(uint i, uint N)
{
    return vec2(float(i)/float(N), RadicalInverse_VdC(i));
}
The GLSL Hammersley function gives us the low-discrepancy sample i of the total sample set of size N.
GLSL的Hammersley函數給出大小為N的總采樣集中的第i個低差異采樣。
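為了便於驗證,這里給出上面GLSL代碼的一個Python移植草圖(注意Python整數無界,第一次移位后要手動按32位截斷):Van Der Corpus序列的前幾項是0.5、0.25、0.75,正是二進制小數按小數點鏡像的結果。

```python
def radical_inverse_vdc(bits):
    """GLSL版RadicalInverse_VdC的Python移植:按位反轉32位整數,
    再除以2^32,得到[0, 1)內的Van Der Corpus值。"""
    bits = ((bits << 16) | (bits >> 16)) & 0xFFFFFFFF
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return bits * 2.3283064365386963e-10  # 即除以0x100000000

def hammersley(i, n):
    return (i / n, radical_inverse_vdc(i))

print([radical_inverse_vdc(i) for i in (1, 2, 3)])  # [0.5, 0.25, 0.75]
```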
不使用位操作的Hammersley 序列
Not all OpenGL related drivers support bit operators (WebGL and OpenGL ES 2.0 for instance) in which case you might want to use an alternative version of the Van Der Corpus Sequence that doesn't rely on bit operators:
不是所有OpenGL相關的驅動程序都支持位運算符(例如WebGL和OpenGL ES 2.0),這時你可能需要一個不依賴位運算符的Van Der Corpus序列替代版本:
float VanDerCorpus(uint n, uint base)
{
    float invBase = 1.0 / float(base);
    float denom   = 1.0;
    float result  = 0.0;

    for(uint i = 0u; i < 32u; ++i)
    {
        if(n > 0u)
        {
            denom   = mod(float(n), 2.0);
            result += denom * invBase;
            invBase = invBase / 2.0;
            n       = uint(float(n) / 2.0);
        }
    }

    return result;
}
// ----------------------------------------------------------------------------
vec2 HammersleyNoBitOps(uint i, uint N)
{
    return vec2(float(i)/float(N), VanDerCorpus(i, 2u));
}
Note that due to GLSL loop restrictions in older hardware the sequence loops over all possible 32 bits. This version is less performant, but does work on all hardware if you ever find yourself without bit operators.
注意,由於舊硬件上GLSL循環的限制,該序列在所有可能的32位上循環。這個版本性能較差,但在所有硬件上都能工作,如果你發現用不了位運算符,就可以用它。
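同樣給出這個無位運算版本的Python移植草圖,便於驗證它在base=2時與按位反轉的結果一致。注意上面的GLSL代碼雖然接受base參數,循環里卻固定用2做取模和除法,所以它實際上只對base=2正確;移植時保留了這一行為。

```python
def van_der_corpus(n, base):
    """GLSL版VanDerCorpus的Python移植。注意:和原代碼一樣,
    循環內固定按2取模、除2,因此只有base=2時結果才是標准的
    Van Der Corpus序列。"""
    inv_base = 1.0 / base
    denom = 1.0
    result = 0.0
    for _ in range(32):  # 舊硬件限制:固定在32位上循環
        if n > 0:
            denom = float(n % 2)
            result += denom * inv_base
            inv_base /= 2.0
            n //= 2
    return result

print([van_der_corpus(i, 2) for i in (1, 2, 3)])  # [0.5, 0.25, 0.75]
```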
GGX重要性采樣
Instead of uniformly or randomly (Monte Carlo) generating sample vectors over the integral's hemisphere Ω we'll generate sample vectors biased towards the general reflection orientation of the microsurface halfway vector based on the surface's roughness. The sampling process will be similar to what we've seen before: begin a large loop, generate a random (low-discrepancy) sequence value, take the sequence value to generate a sample vector in tangent space, transform to world space and sample the scene's radiance. What's different is that we now use a low-discrepancy sequence value as input to generate a sample vector:
與其在積分半球Ω上均勻或(蒙特卡羅式地)隨機生成采樣向量,我們根據表面粗糙度,生成偏向微表面半角向量大致反射方向的采樣向量。采樣過程與我們之前見到的類似:開始一個大循環,生成一個隨機(低差異)序列值,用該序列值在tangent空間生成采樣向量,變換到world空間,再采集場景的輻射率。不同之處在於,我們現在以低差異序列值為輸入來生成采樣向量:
const uint SAMPLE_COUNT = 4096u;
for(uint i = 0u; i < SAMPLE_COUNT; ++i)
{
    vec2 Xi = Hammersley(i, SAMPLE_COUNT);
Additionally, to build a sample vector, we need some way of orienting and biasing the sample vector towards the specular lobe of some surface roughness. We can take the NDF as described in the Theory tutorial and combine the GGX NDF in the spherical sample vector process as described by Epic Games:
另外,為構造采樣向量,我們需要某種方式,使采樣向量朝向並偏向某個表面粗糙度對應的specular葉。我們可以採用理論教程中描述的NDF,並按Epic游戲公司所述,把GGX NDF結合進球面采樣向量的生成過程中:
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness)
{
    float a = roughness*roughness;

    float phi = 2.0 * PI * Xi.x;
    float cosTheta = sqrt((1.0 - Xi.y) / (1.0 + (a*a - 1.0) * Xi.y));
    float sinTheta = sqrt(1.0 - cosTheta*cosTheta);

    // from spherical coordinates to cartesian coordinates
    vec3 H;
    H.x = cos(phi) * sinTheta;
    H.y = sin(phi) * sinTheta;
    H.z = cosTheta;

    // from tangent-space vector to world-space sample vector
    vec3 up        = abs(N.z) < 0.999 ? vec3(0.0, 0.0, 1.0) : vec3(1.0, 0.0, 0.0);
    vec3 tangent   = normalize(cross(up, N));
    vec3 bitangent = cross(N, tangent);

    vec3 sampleVec = tangent * H.x + bitangent * H.y + N * H.z;
    return normalize(sampleVec);
}
This gives us a sample vector somewhat oriented around the expected microsurface's halfway vector based on some input roughness and the low-discrepancy sequence value Xi. Note that Epic Games uses the squared roughness for better visual results as based on Disney's original PBR research.
基於輸入的粗糙度和低差異序列值Xi,這給出了一個大致圍繞預期微表面半角向量的采樣向量。注意,Epic游戲公司根據Disney最初的PBR研究,使用了粗糙度的平方,以獲得更好的視覺效果。
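下面是ImportanceSampleGGX的一個Python移植草圖(向量用三元組表示,cross/normalize是本文自己寫的輔助函數),可以直觀檢查兩個性質:Xi=(0,0)時采樣向量就是法線本身;粗糙度越小,采樣向量越貼近法線(H.z越大)。

```python
import math

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def normalize(v):
    length = math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2])
    return (v[0]/length, v[1]/length, v[2]/length)

def importance_sample_ggx(xi, n, roughness):
    """GLSL版ImportanceSampleGGX的移植:把二維低差異點xi映射為
    圍繞法線n、按GGX NDF偏置的半角向量(world空間)。"""
    a = roughness * roughness  # Epic按Disney的研究對粗糙度取平方

    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt((1.0 - xi[1]) / (1.0 + (a*a - 1.0) * xi[1]))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)

    # 球坐標 -> 笛卡爾坐標(tangent空間)
    h = (math.cos(phi) * sin_theta, math.sin(phi) * sin_theta, cos_theta)

    # tangent空間 -> 圍繞n的world空間
    up = (0.0, 0.0, 1.0) if abs(n[2]) < 0.999 else (1.0, 0.0, 0.0)
    tangent = normalize(cross(up, n))
    bitangent = cross(n, tangent)
    sample = tuple(tangent[i]*h[0] + bitangent[i]*h[1] + n[i]*h[2]
                   for i in range(3))
    return normalize(sample)

n = (0.0, 0.0, 1.0)
print(importance_sample_ggx((0.0, 0.0), n, 0.5))  # (0.0, 0.0, 1.0):就是法線
```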
With the low-discrepancy Hammersley sequence and sample generation defined we can finalize the pre-filter convolution shader:
有了低差異序列和采樣生成,我們可以實現pre-filter卷積shader了:
#version 330 core
out vec4 FragColor;
in vec3 localPos;

uniform samplerCube environmentMap;
uniform float roughness;

const float PI = 3.14159265359;

float RadicalInverse_VdC(uint bits);
vec2 Hammersley(uint i, uint N);
vec3 ImportanceSampleGGX(vec2 Xi, vec3 N, float roughness);

void main()
{
    vec3 N = normalize(localPos);
    vec3 R = N;
    vec3 V = R;

    const uint SAMPLE_COUNT = 1024u;
    float totalWeight = 0.0;
    vec3 prefilteredColor = vec3(0.0);
    for(uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = Hammersley(i, SAMPLE_COUNT);
        vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);

        float NdotL = max(dot(N, L), 0.0);
        if(NdotL > 0.0)
        {
            prefilteredColor += texture(environmentMap, L).rgb * NdotL;
            totalWeight      += NdotL;
        }
    }
    prefilteredColor = prefilteredColor / totalWeight;

    FragColor = vec4(prefilteredColor, 1.0);
}
We pre-filter the environment, based on some input roughness that varies over each mipmap level of the pre-filter cubemap (from 0.0 to 1.0) and store the result in prefilteredColor. The resulting prefilteredColor is divided by the total sample weight, where samples with less influence on the final result (for small NdotL) contribute less to the final weight.
我們以粗糙度為輸入參數對環境做pre-filter,粗糙度隨pre-filter cubemap的mipmap層而變化(從0.0到1.0),並把結果保存到prefilteredColor中。最后把prefilteredColor除以采樣的總權重,其中對最終結果影響較小的采樣(NdotL較小)對總權重的貢獻也較小。
捕捉pre-filter的mipmap層
What's left to do is let OpenGL pre-filter the environment map with different roughness values over multiple mipmap levels. This is actually fairly easy to do with the original setup of the irradiance tutorial:
剩下要做的,就是讓OpenGL對環境貼圖預計算pre-filter,將不同粗糙度的結果放到不同mipmap層上。有了輻照度教程的基礎設施,這相當簡單:
prefilterShader.use();
prefilterShader.setInt("environmentMap", 0);
prefilterShader.setMat4("projection", captureProjection);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);

glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
unsigned int maxMipLevels = 5;
for (unsigned int mip = 0; mip < maxMipLevels; ++mip)
{
    // resize framebuffer according to mip-level size.
    unsigned int mipWidth  = 128 * std::pow(0.5, mip);
    unsigned int mipHeight = 128 * std::pow(0.5, mip);
    glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, mipWidth, mipHeight);
    glViewport(0, 0, mipWidth, mipHeight);

    float roughness = (float)mip / (float)(maxMipLevels - 1);
    prefilterShader.setFloat("roughness", roughness);
    for (unsigned int i = 0; i < 6; ++i)
    {
        prefilterShader.setMat4("view", captureViews[i]);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, prefilterMap, mip);

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderCube();
    }
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
The process is similar to the irradiance map convolution, but this time we scale the framebuffer's dimensions to the appropriate mipmap scale, each mip level reducing the dimensions by 2. Additionally, we specify the mip level we're rendering into in glFramebufferTexture2D's last parameter and pass the roughness we're pre-filtering for to the pre-filter shader.
過程與輻照度貼圖的卷積類似,但這次我們把framebuffer的尺寸縮放到相應的mipmap大小,每升一個mip層,尺寸縮小一半。另外,我們通過glFramebufferTexture2D的最后一個參數指定要渲染到的mip層,並把當前pre-filter所用的粗糙度傳給pre-filter shader。
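上面循環里每層的尺寸和粗糙度可以用一小段Python草圖核對(函數名是本文虛構的):5個mip層對應的面分辨率依次是128到8,粗糙度從0.0均勻增加到1.0。

```python
def prefilter_mip_params(base_size=128, max_mip_levels=5):
    """復現上面C++捕捉循環中每個mip層的(分辨率, 粗糙度)組合。"""
    return [(int(base_size * 0.5 ** mip), mip / (max_mip_levels - 1))
            for mip in range(max_mip_levels)]

for size, roughness in prefilter_mip_params():
    print(size, roughness)
# 依次輸出:128 0.0 / 64 0.25 / 32 0.5 / 16 0.75 / 8 1.0
```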
This should give us a properly pre-filtered environment map that returns blurrier reflections the higher mip level we access it from. If we display the pre-filtered environment cubemap in the skybox shader and forcefully sample somewhat above its first mip level in its shader like so:
這會給我們一個合適的pre-filter環境貼圖:訪問的mip層越高,反射就越模糊。如果我們把pre-filter環境cubemap顯示到天空盒shader上,並在shader中強制在第一層mip之上采樣,像這樣:
vec3 envColor = textureLod(environmentMap, WorldPos, 1.2).rgb;
We get a result that indeed looks like a blurrier version of the original environment:
我們會得到模糊版本的原始環境:
If it looks somewhat similar you've successfully pre-filtered the HDR environment map. Play around with different mipmap levels to see the pre-filter map gradually change from sharp to blurry reflections on increasing mip levels.
如果看起來差不多,你就成功地對HDR環境貼圖做了pre-filter。試試不同的mipmap層,觀察pre-filter貼圖隨mip層增加,反射從清晰逐漸變為模糊。
Pre-filter卷積的瑕疵
While the current pre-filter map works fine for most purposes, sooner or later you'll come across several render artifacts that are directly related to the pre-filter convolution. I'll list the most common here including how to fix them.
盡管目前的pre-filter貼圖在大多數場合都工作得很好,早晚你會遇到幾個與pre-filter卷積直接相關的渲染瑕疵。這里列舉幾個最常見的,並說明如何解決它們。
高粗糙度時的cubemap縫合線
Sampling the pre-filter map on surfaces with a rough surface means sampling the pre-filter map on some of its lower mip levels. When sampling cubemaps, OpenGL by default doesn't linearly interpolate across cubemap faces. Because the lower mip levels are both of a lower resolution and the pre-filter map is convoluted with a much larger sample lobe, the lack of between-cube-face filtering becomes quite apparent:
在粗糙表面上對pre-filter貼圖采樣,意味着在其較深的mip層上采樣。對cubemap采樣時,OpenGL默認不在cubemap各個面之間做線性插值。由於較深的mip層分辨率更低,而且pre-filter貼圖是用大得多的采樣葉卷積出來的,缺少面與面之間的過濾就變得十分明顯:
Luckily for us, OpenGL gives us the option to properly filter across cubemap faces by enabling GL_TEXTURE_CUBE_MAP_SEAMLESS:
幸運的是,OpenGL提供了在cubemap各面之間正確過濾的選項,只需啟用GL_TEXTURE_CUBE_MAP_SEAMLESS:
glEnable(GL_TEXTURE_CUBE_MAP_SEAMLESS);
Simply enable this property somewhere at the start of your application and the seams will be gone.
在應用程序啟動的時候,啟用這個開關,縫合線就消失了。
Pre-filter卷積中的亮點
Due to high frequency details and wildly varying light intensities in specular reflections, convoluting the specular reflections requires a large number of samples to properly account for the wildly varying nature of HDR environmental reflections. We already take a very large number of samples, but on some environments it might still not be enough at some of the rougher mip levels in which case you'll start seeing dotted patterns emerge around bright areas:
由於specular反射中的高頻細節和劇烈變化的光強,對specular反射做卷積需要大量采樣,才能妥善處理HDR環境反射劇烈變化的本性。我們已經取了非常大的采樣量,但在某些環境下、在較粗糙的一些mip層上,這仍然不夠,這時你會看到亮光區域周圍出現點狀圖案:
One option is to further increase the sample count, but this won't be enough for all environments. As described by Chetan Jags we can reduce this artifact by (during the pre-filter convolution) not directly sampling the environment map, but sampling a mip level of the environment map based on the integral's PDF and the roughness:
一個選擇是進一步增加采樣量,但這對某些環境仍然不夠。如Chetan Jags所述,我們可以在pre-filter卷積中不直接對環境貼圖采樣,而是基於積分的pdf和粗糙度對環境貼圖的某個mip層采樣,以減少這種瑕疵:
float D   = DistributionGGX(NdotH, roughness);
float pdf = (D * NdotH / (4.0 * HdotV)) + 0.0001;

float resolution = 512.0; // resolution of source cubemap (per face)
float saTexel  = 4.0 * PI / (6.0 * resolution * resolution);
float saSample = 1.0 / (float(SAMPLE_COUNT) * pdf + 0.0001);

float mipLevel = roughness == 0.0 ? 0.0 : 0.5 * log2(saSample / saTexel);
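把這段GLSL移植成Python草圖(DistributionGGX取自前面理論教程中的GGX NDF;下面的參數取值僅作演示),可以直觀看出:粗糙度越大、pdf越小,選到的環境貼圖mip層就越深(越模糊)。

```python
import math

def distribution_ggx(n_dot_h, roughness):
    """理論教程中的GGX/Trowbridge-Reitz法線分布函數。"""
    a = roughness * roughness
    a2 = a * a
    denom = n_dot_h * n_dot_h * (a2 - 1.0) + 1.0
    return a2 / (math.pi * denom * denom)

def prefilter_mip_level(n_dot_h, h_dot_v, roughness,
                        sample_count=1024, resolution=512.0):
    """Chetan Jags的技巧:比較單個樣本覆蓋的立體角(1/(N*pdf))
    與源cubemap單個紋素的立體角,據此選取環境貼圖的mip層。"""
    if roughness == 0.0:
        return 0.0
    d = distribution_ggx(n_dot_h, roughness)
    pdf = d * n_dot_h / (4.0 * h_dot_v) + 0.0001
    sa_texel = 4.0 * math.pi / (6.0 * resolution * resolution)
    sa_sample = 1.0 / (sample_count * pdf + 0.0001)
    return 0.5 * math.log2(sa_sample / sa_texel)

print(prefilter_mip_level(1.0, 1.0, 0.1))  # 光滑表面:mip層很低
print(prefilter_mip_level(1.0, 1.0, 0.9))  # 粗糙表面:mip層明顯更深
```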
Don't forget to enable trilinear filtering on the environment map you want to sample its mip levels from:
別忘了為要采樣其mip層的環境貼圖啟用三線性過濾:
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
And let OpenGL generate the mipmaps after the cubemap's base texture is set:
並且在設置好cubemap的基層紋理之后,讓OpenGL生成mipmap:
// convert HDR equirectangular environment map to cubemap equivalent
[...]
// then generate mipmaps
glBindTexture(GL_TEXTURE_CUBE_MAP, envCubemap);
glGenerateMipmap(GL_TEXTURE_CUBE_MAP);
This works surprisingly well and should remove most, if not all, dots in your pre-filter map on rougher surfaces.
這招出人意料地好用,應該能去掉pre-filter貼圖中粗糙表面上大部分(如果不是全部)的亮點。
預計算BRDF
With the pre-filtered environment up and running, we can focus on the second part of the split-sum approximation: the BRDF. Let's briefly review the specular split sum approximation again:
隨着pre-filter環境貼圖正常運行,我們可以關注拆分求和近似的第二個部分:BRDF。我們簡單地回顧一下拆分求和近似:
We've pre-computed the left part of the split sum approximation in the pre-filter map over different roughness levels. The right side requires us to convolute the BRDF equation over the angle n⋅ωo, the surface roughness and Fresnel's F0. This is similar to integrating the specular BRDF with a solid-white environment or a constant radiance Li of 1.0. Convoluting the BRDF over 3 variables is a bit much, but we can move F0 out of the specular BRDF equation:
我們已經在pre-filter貼圖中對不同的粗糙度預計算了拆分求和近似的左半部分。右邊的積分要求我們對角度n⋅ωo、粗糙度和菲涅耳常數F0這幾個變量對BRDF進行卷積。這類似於在純白色環境下(即輻射度Li恆為1.0)對specular BRDF積分。對3個變量做卷積有點多,但我們可以把F0從specular BRDF方程中移出來:
With F being the Fresnel equation. Moving the Fresnel denominator to the BRDF gives us the following equivalent equation:
其中F代表菲涅耳方程。將菲涅耳分母移到BRDF,得到:
Substituting the right-most F with the Fresnel-Schlick approximation gives us:
用Fresnel-Schlick近似代替右邊的F,得到:
Let's replace (1−ωo⋅h)5 by α to make it easier to solve for F0:
我們用α代替(1−ωo⋅h)5,從而讓求解F0簡單一點:
Then we split the Fresnel function F over two integrals:
然后我們把菲涅耳方程F拆分為2個積分:
This way, F0 is constant over the integral and we can take F0 out of the integral. Next, we substitute α back to its original form giving us the final split sum BRDF equation:
這樣,F0在積分上就是常量,我們可以將F0移出積分。下一步,我們將α替換回它原來的形式。現在就得到了最終的拆分求和BRDF方程:
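原文中這一步的公式是以圖片形式給出的,這里按原文的符號用LaTeX把最終的拆分求和BRDF方程重寫一遍(兩個積分分別對應F0的縮放和偏移):

```latex
\int_\Omega f_r(p, \omega_i, \omega_o)\, n \cdot \omega_i\, d\omega_i
\approx
F_0 \int_\Omega f_r(p, \omega_i, \omega_o)
      \left(1 - (1 - \omega_o \cdot h)^5\right) n \cdot \omega_i\, d\omega_i
+ \int_\Omega f_r(p, \omega_i, \omega_o)
      (1 - \omega_o \cdot h)^5\, n \cdot \omega_i\, d\omega_i
```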
The two resulting integrals represent a scale and a bias to F0 respectively. Note that as f(p,ωi,ωo) already contains a term for F they both cancel out, removing F from f.
這兩個積分分別表示對F0的縮放和偏移。注意,由於f(p,ωi,ωo)中已經包含菲涅耳項F,二者相互抵消,從而將F從f中移除了。
In a similar fashion to the earlier convoluted environment maps, we can convolute the BRDF equations on their inputs: the angle between n and ωo and the roughness, and store the convoluted result in a texture. We store the convoluted results in a 2D lookup texture (LUT) known as a BRDF integration map that we later use in our PBR lighting shader to get the final convoluted indirect specular result.
類似之前的卷積操作,我們可以針對BRDF的輸入參數進行卷積:n和ωo之間的角度,以及粗糙度。然后將結果保存到貼圖中。我們將卷積結果保存到一個二維查詢紋理(LUT),即BRDF積分貼圖,稍后用於PBR光照shader,得到最終的非直接specular光照結果。
The BRDF convolution shader operates on a 2D plane, using its 2D texture coordinates directly as inputs to the BRDF convolution (NdotV and roughness). The convolution code is largely similar to the pre-filter convolution, except that it now processes the sample vector according to our BRDF's geometry function and Fresnel-Schlick's approximation:
BRDF卷積shader在二維平面上工作,直接以二維紋理坐標作為BRDF卷積的輸入參數(NdotV和roughness)。卷積代碼和pre-filter卷積很相似,除了它現在根據BRDF的幾何函數和Fresnel-Schlick近似來處理采樣向量:

vec2 IntegrateBRDF(float NdotV, float roughness)
{
    vec3 V;
    V.x = sqrt(1.0 - NdotV*NdotV);
    V.y = 0.0;
    V.z = NdotV;

    float A = 0.0;
    float B = 0.0;

    vec3 N = vec3(0.0, 0.0, 1.0);

    const uint SAMPLE_COUNT = 1024u;
    for(uint i = 0u; i < SAMPLE_COUNT; ++i)
    {
        vec2 Xi = Hammersley(i, SAMPLE_COUNT);
        vec3 H  = ImportanceSampleGGX(Xi, N, roughness);
        vec3 L  = normalize(2.0 * dot(V, H) * H - V);

        float NdotL = max(L.z, 0.0);
        float NdotH = max(H.z, 0.0);
        float VdotH = max(dot(V, H), 0.0);

        if(NdotL > 0.0)
        {
            float G = GeometrySmith(N, V, L, roughness);
            float G_Vis = (G * VdotH) / (NdotH * NdotV);
            float Fc = pow(1.0 - VdotH, 5.0);

            A += (1.0 - Fc) * G_Vis;
            B += Fc * G_Vis;
        }
    }
    A /= float(SAMPLE_COUNT);
    B /= float(SAMPLE_COUNT);
    return vec2(A, B);
}
// ----------------------------------------------------------------------------
void main()
{
    vec2 integratedBRDF = IntegrateBRDF(TexCoords.x, TexCoords.y);
    FragColor = integratedBRDF;
}
As you can see the BRDF convolution is a direct translation from the mathematics to code. We take both the angle θ and the roughness as input, generate a sample vector with importance sampling, process it over the geometry and the derived Fresnel term of the BRDF, and output both a scale and a bias to F0 for each sample, averaging them in the end.
你可以看到,BRDF卷積就是直接把數學公式轉換為代碼。我們以角度θ和粗糙度為輸入參數,用重要性采樣生成采樣向量,用BRDF的幾何函數和推導出的菲涅耳項處理它,對每個采樣輸出F0的縮放和偏移量,最后求平均值。
You might've recalled from the theory tutorial that the geometry term of the BRDF is slightly different when used alongside IBL as its k variable has a slightly different interpretation:
你可能還記得,在理論教程中,BRDF的幾何函數項在用於IBL時略有不同,因為其中k變量的含義不一樣:
Since the BRDF convolution is part of the specular IBL integral we'll use kIBL for the Schlick-GGX geometry function:
既然BRDF卷積是specular IBL積分的一部分,我們在Schlick-GGX幾何函數中使用kIBL:
float GeometrySchlickGGX(float NdotV, float roughness)
{
    float a = roughness;
    float k = (a * a) / 2.0;

    float nom   = NdotV;
    float denom = NdotV * (1.0 - k) + k;

    return nom / denom;
}
// ----------------------------------------------------------------------------
float GeometrySmith(vec3 N, vec3 V, vec3 L, float roughness)
{
    float NdotV = max(dot(N, V), 0.0);
    float NdotL = max(dot(N, L), 0.0);
    float ggx2 = GeometrySchlickGGX(NdotV, roughness);
    float ggx1 = GeometrySchlickGGX(NdotL, roughness);

    return ggx1 * ggx2;
}
Note that while k takes a as its parameter we didn't square roughness as a as we originally did for other interpretations of a; likely as a is squared here already. I'm not sure whether this is an inconsistency on Epic Games' part or the original Disney paper, but directly translating roughness to a gives the BRDF integration map that is identical to Epic Games' version.
注意,雖然k以a為參數,但我們沒有像之前對a的其他解釋那樣,先對roughness求平方再作為a;很可能是因為這里的a已經是平方過的了。我不確定這是Epic游戲公司還是最初的Disney論文中的不一致,但直接把roughness當作a,得到的BRDF積分貼圖與Epic游戲公司的版本完全相同。
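下面是一個示意性的Python版本,把上面的IntegrateBRDF在CPU上重寫一遍,方便對LUT的數值做個粗略檢查。Hammersley、ImportanceSampleGGX和幾何函數都按原文公式實現,k取IBL版本的a²/2;函數名是我自己起的,僅作演示:

```python
import math

def hammersley(i, n):
    # Van der Corput基2根逆序列,與i/n配對構成低差異序列
    bits = i
    bits = (bits << 16 | bits >> 16) & 0xFFFFFFFF
    bits = ((bits & 0x55555555) << 1) | ((bits & 0xAAAAAAAA) >> 1)
    bits = ((bits & 0x33333333) << 2) | ((bits & 0xCCCCCCCC) >> 2)
    bits = ((bits & 0x0F0F0F0F) << 4) | ((bits & 0xF0F0F0F0) >> 4)
    bits = ((bits & 0x00FF00FF) << 8) | ((bits & 0xFF00FF00) >> 8)
    return i / n, bits * 2.3283064365386963e-10

def importance_sample_ggx(xi, roughness):
    # 圍繞N = (0, 0, 1)按GGX分布生成半程向量H
    a = roughness * roughness
    phi = 2.0 * math.pi * xi[0]
    cos_theta = math.sqrt((1.0 - xi[1]) / (1.0 + (a * a - 1.0) * xi[1]))
    sin_theta = math.sqrt(1.0 - cos_theta * cos_theta)
    return (math.cos(phi) * sin_theta, math.sin(phi) * sin_theta, cos_theta)

def geometry_smith_ibl(n_dot_v, n_dot_l, roughness):
    # Schlick-GGX,IBL版本:k = a^2 / 2,其中a = roughness(不再平方)
    k = (roughness * roughness) / 2.0
    g1 = n_dot_v / (n_dot_v * (1.0 - k) + k)
    g2 = n_dot_l / (n_dot_l * (1.0 - k) + k)
    return g1 * g2

def integrate_brdf(n_dot_v, roughness, sample_count=1024):
    v = (math.sqrt(1.0 - n_dot_v * n_dot_v), 0.0, n_dot_v)
    a_scale, b_bias = 0.0, 0.0
    for i in range(sample_count):
        xi = hammersley(i, sample_count)
        h = importance_sample_ggx(xi, roughness)
        v_dot_h = v[0] * h[0] + v[1] * h[1] + v[2] * h[2]
        # L = 2 (V.H) H - V:V關於半程向量H的反射
        l = tuple(2.0 * v_dot_h * h[j] - v[j] for j in range(3))
        n_dot_l = max(l[2], 0.0)
        n_dot_h = max(h[2], 0.0)
        v_dot_h = max(v_dot_h, 0.0)
        if n_dot_l > 0.0:
            g = geometry_smith_ibl(n_dot_v, n_dot_l, roughness)
            g_vis = (g * v_dot_h) / (n_dot_h * n_dot_v)
            fc = (1.0 - v_dot_h) ** 5
            a_scale += (1.0 - fc) * g_vis
            b_bias += fc * g_vis
    return a_scale / sample_count, b_bias / sample_count

scale, bias = integrate_brdf(0.5, 0.25)
print(scale, bias)  # 兩個值都落在[0, 1]區間內
```

一個有用的檢查:當roughness為0時,每個采樣都退化為鏡面反射,縮放與偏移之和應精確等於1。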
Finally, to store the BRDF convolution result we'll generate a 2D texture of a 512 by 512 resolution.
最后,為保存BRDF卷積結果,我們創建一個二維紋理,尺寸為512x512:
unsigned int brdfLUTTexture;
glGenTextures(1, &brdfLUTTexture);

// pre-allocate enough memory for the LUT texture.
glBindTexture(GL_TEXTURE_2D, brdfLUTTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RG16F, 512, 512, 0, GL_RG, GL_FLOAT, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
Note that we use a 16-bit precision floating format as recommended by Epic Games. Be sure to set the wrapping mode to GL_CLAMP_TO_EDGE to prevent edge sampling artifacts.
注意,我們使用16位精度浮點數格式,這是Epic游戲公司推薦的。確保wrapping模式為GL_CLAMP_TO_EDGE ,防止邊緣采樣瑕疵。
Then, we re-use the same framebuffer object and run this shader over an NDC screen-space quad:
然后,我們復用相同的Framebuffer對象,在NDC空間(畫一個鋪滿屏幕的四邊形)運行下面的shader:
glBindFramebuffer(GL_FRAMEBUFFER, captureFBO);
glBindRenderbuffer(GL_RENDERBUFFER, captureRBO);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 512, 512);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, brdfLUTTexture, 0);

glViewport(0, 0, 512, 512);
brdfShader.use();
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
RenderQuad();

glBindFramebuffer(GL_FRAMEBUFFER, 0);
The convoluted BRDF part of the split sum integral should give you the following result:
拆分求和積分的BRDF部分的卷積會得到如下圖所示的結果:
With both the pre-filtered environment map and the BRDF 2D LUT we can re-construct the indirect specular integral according to the split sum approximation. The combined result then acts as the indirect or ambient specular light.
有了pre-filter環境貼圖和BRDF的二維LUT貼圖,我們就可以根據拆分求和近似重構非直接specular積分。二者結合的結果就是非直接(環境)specular光照。
IBL反射完成
To get the indirect specular part of the reflectance equation up and running we need to stitch both parts of the split sum approximation together. Let's start by adding the pre-computed lighting data to the top of our PBR shader:
為了讓反射率方程中的非直接specular部分正常運行,我們需要把拆分求和近似的兩部分縫合在一起。首先,把預計算的光照數據加到PBR shader的開頭:
uniform samplerCube prefilterMap;
uniform sampler2D brdfLUT;
First, we get the indirect specular reflections of the surface by sampling the pre-filtered environment map using the reflection vector. Note that we sample the appropriate mip level based on the surface roughness, giving rougher surfaces blurrier specular reflections.
首先,使用反射向量對pre-filter環境貼圖采樣,得到表面的非直接specular反射。注意,我們根據表面的粗糙度對相應的mipmap層采樣,這會讓更粗糙的表面表現出更模糊的specular反射效果。
void main()
{
    [...]
    vec3 R = reflect(-V, N);

    const float MAX_REFLECTION_LOD = 4.0;
    vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
    [...]
}
In the pre-filter step we only convoluted the environment map up to a maximum of 5 mip levels (0 to 4), which we denote here as MAX_REFLECTION_LOD to ensure we don't sample a mip level where there's no (relevant) data.
在pre-filter步驟中,我們只對環境貼圖卷積了最多5個mipmap層(0到4),這里用MAX_REFLECTION_LOD表示,以確保不會對沒有(相關)數據的mipmap層采樣。
Then we sample from the BRDF lookup texture given the material's roughness and the angle between the normal and view vector:
然后,給定材質的粗糙度,以及法線和觀察向量之間的夾角,我們從BRDF查詢紋理中采樣:
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);
vec2 envBRDF = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);
Given the scale and bias to F0 (here we're directly using the indirect Fresnel result F) from the BRDF lookup texture we combine this with the left pre-filter portion of the IBL reflectance equation and re-construct the approximated integral result as specular.
從BRDF查詢紋理中得到F0的縮放和偏移(這里我們直接使用了非直接菲涅耳結果F)后,我們將其與IBL反射率方程左邊的pre-filter部分結合,重構出specular的近似積分值。
This gives us the indirect specular part of the reflectance equation. Now, combine this with the diffuse part of the reflectance equation from the last tutorial and we get the full PBR IBL result:
這就得到了反射率方程的非直接specular部分。現在,聯合反射率方程的diffuse部分(從上一篇教程中)我們就得到了PBR全部的IBL結果:
vec3 F = FresnelSchlickRoughness(max(dot(N, V), 0.0), F0, roughness);

vec3 kS = F;
vec3 kD = 1.0 - kS;
kD *= 1.0 - metallic;

vec3 irradiance = texture(irradianceMap, N).rgb;
vec3 diffuse = irradiance * albedo;

const float MAX_REFLECTION_LOD = 4.0;
vec3 prefilteredColor = textureLod(prefilterMap, R, roughness * MAX_REFLECTION_LOD).rgb;
vec2 envBRDF = texture(brdfLUT, vec2(max(dot(N, V), 0.0), roughness)).rg;
vec3 specular = prefilteredColor * (F * envBRDF.x + envBRDF.y);

vec3 ambient = (kD * diffuse + specular) * ao;
Note that we don't multiply specular by kS as we already have a Fresnel multiplication in there.
注意,我們沒有用kS 乘以specular ,因為我們這里已經有個菲涅耳乘過了。
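最后的合成一步可以用幾行Python示意。其中的數值純屬假設:非金屬的F取約0.04,pre-filter環境色取中灰,envBRDF的縮放/偏移是隨手舉的例子:

```python
def indirect_specular(prefiltered_color, f, env_brdf):
    # specular = prefilteredColor * (F * scale + bias),逐通道計算
    scale, bias = env_brdf
    return tuple(c * (fi * scale + bias) for c, fi in zip(prefiltered_color, f))

# 假設值:F ≈ 0.04(非金屬),pre-filter環境色為中灰,
# 從LUT中查到的(縮放, 偏移)取(0.9, 0.05)
spec = indirect_specular((0.5, 0.5, 0.5), (0.04, 0.04, 0.04), (0.9, 0.05))
print(spec)  # 約為(0.043, 0.043, 0.043)
```

可以看到,縮放作用在F上、偏移直接相加,所以不需要再乘kS。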
Now, running this exact code on the series of spheres that differ by their roughness and metallic properties we finally get to see their true colors in the final PBR renderer:
現在,在那一排粗糙度和金屬度各不相同的球體上運行這段代碼,我們終于得以在最終的PBR渲染器中看到它們的真實面貌:
We could even go wild, and use some cool textured PBR materials:
我們還可以更瘋狂點,使用一些酷炫的帶紋理的PBR材質:
Or load this awesome free PBR 3D model by Andrew Maximov:
或者加載這個棒棒的免費PBR三維模型(Andrew Maximov所做):
I'm sure we can all agree that our lighting now looks a lot more convincing. What's even better, is that our lighting looks physically correct, regardless of which environment map we use. Below you'll see several different pre-computed HDR maps, completely changing the lighting dynamics, but still looking physically correct without changing a single lighting variable!
我確定我們都同意這一點:我們的光照現在看起來真實得多了。更好的是,無論我們用哪個環境貼圖,我們的光照看起來都是物理正確的。下面你將看到幾個不同的預計算HDR貼圖,它們完全改變了光照效果,但看起來仍舊是物理正確的,而這不需要改變任何一個光照參數!
Well, this PBR adventure turned out to be quite a long journey. There are a lot of steps and thus a lot that could go wrong so carefully work your way through the sphere scene or textured scene code samples (including all shaders) if you're stuck, or check and ask around in the comments.
此次PBR探險之旅最終顯得相當漫長。其中有很多步驟,因此也有很多可能出錯的地方。如果卡住了,請仔細檢查sphere scene或textured scene的示例代碼(包括所有的shader),或者在評論區查看、提問。
接下來?
Hopefully, by the end of this tutorial you should have a pretty clear understanding of what PBR is about, and even have an actual PBR renderer up and running. In these tutorials, we've pre-computed all the relevant PBR image-based lighting data at the start of our application, before the render loop. This was fine for educational purposes, but not too great for any practical use of PBR. First, the pre-computation only really has to be done once, not at every startup. And second, the moment you use multiple environment maps you'll have to pre-compute each and every one of them at every startup which tends to build up.
但願,在本教程結束時,你已經對PBR有了很清楚的理解,甚至有了一個真正運行起來的PBR渲染器。在這些教程中,我們在程序啟動時、渲染循環之前,預計算了所有相關的IBL數據。這對教學是沒問題的,但對PBR的實際應用就不太合適了。首先,預計算其實只需要做一次,不需要每次啟動都做。其次,一旦使用多個環境貼圖,你就不得不在每次啟動時把每一個都預計算一遍,開銷會不斷累積。
For this reason you'd generally pre-compute an environment map into an irradiance and pre-filter map just once, and then store it on disk (note that the BRDF integration map isn't dependent on an environment map so you only need to calculate or load it once). This does mean you'll need to come up with a custom image format to store HDR cubemaps, including their mip levels. Or, you'll store (and load) it as one of the available formats (like .dds that supports storing mip levels).
因此,一般你只需把環境貼圖預計算為輻照度貼圖和pre-filter貼圖一次,然后將結果保存到硬盤上(注意,BRDF積分貼圖不依賴具體的環境貼圖,所以只需計算或加載一次)。這意味着你需要設計一個自定義的圖片格式來存儲HDR的cubemap,包括它們的各個mipmap層。或者,用現有的格式(比如支持保存mipmap層的.dds)來存儲和加載。
Furthermore, we've described the total process in these tutorials, including generating the pre-computed IBL images to help further our understanding of the PBR pipeline. But, you'll be just as fine by using several great tools like cmftStudio or IBLBaker to generate these pre-computed maps for you.
此外,我們在教程中描述了預計算的所有過程,這是為了加深對PBR管道的理解。你完全可以用一些棒棒的工具(例如cmftStudio 或IBLBaker )來為你完成這些預計算。
One point we've skipped over is pre-computed cubemaps as reflection probes: cubemap interpolation and parallax correction. This is the process of placing several reflection probes in your scene that take a cubemap snapshot of the scene at that specific location, which we can then convolute as IBL data for that part of the scene. By interpolating between several of these probes based on the camera's vicinity we can achieve local high-detail image-based lighting that is simply limited by the amount of reflection probes we're willing to place. This way, the image-based lighting could correctly update when moving from a bright outdoor section of a scene to a darker indoor section for instance. I'll write a tutorial about reflection probes somewhere in the future, but for now I recommend the article by Chetan Jags below to give you a head start.
我們跳過的一個點,是把預計算的cubemap用作反射探針:cubemap插值和視差校正。這說的是,在場景中放置若干反射探針,它們在各自的位置給場景拍一個cubemap快照,然后我們把它卷積為IBL數據,用於場景中那一部分的光照計算。通過根據camera的位置在附近的幾個探針間插值,我們可以實現局部的高細節IBL光照,其效果僅受限於我們願意放置的反射探針數量。這樣,比如從場景中明亮的室外部分移動到較暗的室內部分時,IBL光照就能正確地更新。未來我將寫一篇關於反射探針的教程,但現在我推薦下面Chetan Jags的文章,你可以從它開始。
更多閱讀
- Real Shading in Unreal Engine 4: explains Epic Games' split sum approximation. This is the article the IBL PBR code is based on.
- 解釋了Epic游戲公司的拆分求和近似方案。本文的IBL PBR代碼正是基於此文。
- Physically Based Shading and Image Based Lighting: great blog post by Trent Reed about integrating specular IBL into a PBR pipeline in real time.
- 了不起的Trent Reed的博客文章,關於將specular IBL實時集成進PBR管道。
- Image Based Lighting: very extensive write-up by Chetan Jags about specular-based image-based lighting and several of its caveats, including light probe interpolation.
- Chetan Jags關於基於specular的IBL的詳盡文章,包括其中的若干注意事項以及光照探針插值。
- Moving Frostbite to PBR: well written and in-depth overview of integrating PBR into a AAA game engine by Sébastien Lagarde and Charles de Rousiers.
- 由Sébastien Lagarde和Charles de Rousiers編寫的深度好文,概述了如何將PBR集成到AAA游戲引擎。
- Physically Based Rendering – Part Three: high level overview of IBL lighting and PBR by the JMonkeyEngine team.
- 由JMonkeyEngine小組提供的IBL光照和PBR高層概覽。
- Implementation Notes: Runtime Environment Map Filtering for Image Based Lighting: extensive write-up by Padraic Hennessy about pre-filtering HDR environment maps and significantly optimizing the sample process.
- Padraic Hennessy的詳盡文章,關於pre-filter HDR環境貼圖以及顯著優化采樣過程。