Cesium Source Code Analysis --- Ambient Occlusion


  Ambient Occlusion, AO for short, describes how objects that intersect or sit close to one another block the surrounding diffuse light. Baidu Baike describes the effect like this: AO fixes or improves light leaking, floating geometry, and unconvincing shadows; it sharpens the rendering of seams, creases, wall corners, edges, and small objects; and it improves overall detail, especially shadows in dark areas, enhancing the sense of depth and realism while strengthening the image's light-dark contrast and artistic quality. Put simply, AO darkens each point according to how much the surrounding geometry occludes incoming light. The general theory behind AO is easy to find online for anyone interested; here we only walk through how Cesium implements it. Enough talk, on to the code.

1. Enabling AO and Its Effect

  Enabling AO in Cesium is very simple and works much like the Silhouette effect covered earlier:

var ambientOcclusion = viewer.scene.postProcessStages.ambientOcclusion;
ambientOcclusion.enabled = true;
ambientOcclusion.uniforms.ambientOcclusionOnly = false;
ambientOcclusion.uniforms.intensity = 3;
ambientOcclusion.uniforms.bias = 0.1;
ambientOcclusion.uniforms.lengthCap = 0.03;
ambientOcclusion.uniforms.stepSize = 1;
ambientOcclusion.uniforms.blurStepSize = 0.86;

Figure 1 below shows the scene without AO, Figure 2 shows it with AO enabled, and Figure 3 shows the AO term on its own:

2. JS Implementation

  The code that adds the AO feature to the PostProcessStageLibrary class is as follows:

PostProcessStageLibrary.createAmbientOcclusionStage = function() {
    var generate = new PostProcessStage({
        name : 'czm_ambient_occlusion_generate',
        fragmentShader : AmbientOcclusionGenerate,
        uniforms : {
            intensity : 3.0,
            bias : 0.1,
            lengthCap : 0.26,
            stepSize : 1.95,
            frustumLength : 1000.0,
            randomTexture : undefined
        }
    });
    var blur = createBlur('czm_ambient_occlusion_blur');
    blur.uniforms.stepSize = 0.86;
    var generateAndBlur = new PostProcessStageComposite({
        name : 'czm_ambient_occlusion_generate_blur',
        stages : [generate, blur]
    });

    var ambientOcclusionModulate = new PostProcessStage({
        name : 'czm_ambient_occlusion_composite',
        fragmentShader : AmbientOcclusionModulate,
        uniforms : {
            ambientOcclusionOnly : false,
            ambientOcclusionTexture : generateAndBlur.name
        }
    });

    var uniforms = {};
    defineProperties(uniforms, {
        intensity : {
            get : function() {
                return generate.uniforms.intensity;
            },
            set : function(value) {
                generate.uniforms.intensity = value;
            }
        },
        bias : {
            get : function() {
                return generate.uniforms.bias;
            },
            set : function(value) {
                generate.uniforms.bias = value;
            }
        },
        lengthCap : {
            get : function() {
                return generate.uniforms.lengthCap;
            },
            set : function(value) {
                generate.uniforms.lengthCap = value;
            }
        },
        stepSize : {
            get : function() {
                return generate.uniforms.stepSize;
            },
            set : function(value) {
                generate.uniforms.stepSize = value;
            }
        },
        frustumLength : {
            get : function() {
                return generate.uniforms.frustumLength;
            },
            set : function(value) {
                generate.uniforms.frustumLength = value;
            }
        },
        randomTexture : {
            get : function() {
                return generate.uniforms.randomTexture;
            },
            set : function(value) {
                generate.uniforms.randomTexture = value;
            }
        },
        delta : {
            get : function() {
                return blur.uniforms.delta;
            },
            set : function(value) {
                blur.uniforms.delta = value;
            }
        },
        sigma : {
            get : function() {
                return blur.uniforms.sigma;
            },
            set : function(value) {
                blur.uniforms.sigma = value;
            }
        },
        blurStepSize : {
            get : function() {
                return blur.uniforms.stepSize;
            },
            set : function(value) {
                blur.uniforms.stepSize = value;
            }
        },
        ambientOcclusionOnly : {
            get : function() {
                return ambientOcclusionModulate.uniforms.ambientOcclusionOnly;
            },
            set : function(value) {
                ambientOcclusionModulate.uniforms.ambientOcclusionOnly = value;
            }
        }
    });

    return new PostProcessStageComposite({
        name : 'czm_ambient_occlusion',
        stages : [generateAndBlur, ambientOcclusionModulate],
        inputPreviousStageTexture : false,
        uniforms : uniforms
    });
};

   As the code above shows, three processing stages are created: generate, blur, and ambientOcclusionModulate. generate computes an occlusion value for every pixel on screen and outputs it as a grayscale image; blur smooths that grayscale image; ambientOcclusionModulate then adjusts the original scene color according to the grayscale image. The three stages are covered in detail below.
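Note that the uniforms object handed to the outer composite stores nothing itself: every property forwards reads and writes to the uniforms of one of the inner stages (Cesium's defineProperties helper is essentially a wrapper over Object.defineProperties). A minimal sketch of this forwarding pattern, using plain stand-in objects rather than real Cesium stages:

```javascript
// Stand-in stage objects (hypothetical, not Cesium classes): each owns
// a plain uniforms map, just like real PostProcessStage instances do.
var generate = { uniforms: { intensity: 3.0, bias: 0.1 } };
var blur = { uniforms: { stepSize: 0.86 } };

// The composite's uniforms object stores no state of its own; every
// accessor delegates to the uniforms map of one of the inner stages.
var compositeUniforms = {};
Object.defineProperties(compositeUniforms, {
    intensity: {
        get: function () { return generate.uniforms.intensity; },
        set: function (value) { generate.uniforms.intensity = value; }
    },
    blurStepSize: {
        get: function () { return blur.uniforms.stepSize; },
        set: function (value) { blur.uniforms.stepSize = value; }
    }
});

// Writing through the composite mutates the inner stage directly.
compositeUniforms.intensity = 5.0;
compositeUniforms.blurStepSize = 1.2;
console.log(generate.uniforms.intensity); // 5
console.log(blur.uniforms.stepSize);      // 1.2
```

This is why setting ambientOcclusion.uniforms.intensity in the first listing immediately affects the generate shader: there is only one underlying value.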

2.1 The generate Stage

  Computing a pixel's occlusion factor can be summarized as: take samples around the pixel, compute each sample's occlusion contribution to the center pixel, and accumulate the contributions into the center pixel's final occlusion value. The implementation splits into two parts:

  1: Computing the eye-space normal of the surface patch a pixel represents

  The center pixel and its four neighbors (up, down, left, right) are unprojected into eye space, giving five eye-space positions: posInCamera, posInCameraUp, posInCameraDown, posInCameraLeft, and posInCameraRight. Differencing the four neighbors against the center yields the vectors up, down, left, and right. From the vertical pair and the horizontal pair, the vector with the smaller magnitude is selected, giving DY and DX respectively; the shorter difference is less likely to straddle a depth discontinuity at an object edge. The center pixel's normal is then normalize(cross(DY, DX)). The whole process is illustrated below:

  

  The corresponding GLSL code:

vec4 clipToEye(vec2 uv, float depth)
{
    vec2 xy = vec2((uv.x * 2.0 - 1.0), ((1.0 - uv.y) * 2.0 - 1.0));
    vec4 posEC = czm_inverseProjection * vec4(xy, depth, 1.0);
    posEC = posEC / posEC.w;
    return posEC;
}

// Reconstruct normal without edge removal
vec3 getNormalXEdge(vec3 posInCamera, float depthU, float depthD, float depthL, float depthR, vec2 pixelSize)
{
    vec4 posInCameraUp = clipToEye(v_textureCoordinates - vec2(0.0, pixelSize.y), depthU);
    vec4 posInCameraDown = clipToEye(v_textureCoordinates + vec2(0.0, pixelSize.y), depthD);
    vec4 posInCameraLeft = clipToEye(v_textureCoordinates - vec2(pixelSize.x, 0.0), depthL);
    vec4 posInCameraRight = clipToEye(v_textureCoordinates + vec2(pixelSize.x, 0.0), depthR);

    vec3 up = posInCamera.xyz - posInCameraUp.xyz;
    vec3 down = posInCameraDown.xyz - posInCamera.xyz;
    vec3 left = posInCamera.xyz - posInCameraLeft.xyz;
    vec3 right = posInCameraRight.xyz - posInCamera.xyz;

    vec3 DX = length(left) < length(right) ? left : right;
    vec3 DY = length(up) < length(down) ? up : down;

    return normalize(cross(DY, DX));
}

void main(void)
{
    float depth = czm_readDepth(depthTexture, v_textureCoordinates);
    vec4 posInCamera = clipToEye(v_textureCoordinates, depth);

    if (posInCamera.z > frustumLength)
    {
        gl_FragColor = vec4(1.0);
        return;
    }

    vec2 pixelSize = 1.0 / czm_viewport.zw;
    float depthU = czm_readDepth(depthTexture, v_textureCoordinates - vec2(0.0, pixelSize.y));
    float depthD = czm_readDepth(depthTexture, v_textureCoordinates + vec2(0.0, pixelSize.y));
    float depthL = czm_readDepth(depthTexture, v_textureCoordinates - vec2(pixelSize.x, 0.0));
    float depthR = czm_readDepth(depthTexture, v_textureCoordinates + vec2(pixelSize.x, 0.0));
    vec3 normalInCamera = getNormalXEdge(posInCamera.xyz, depthU, depthD, depthL, depthR, pixelSize);

    // ... the occlusion sampling loop, shown in the next listing, continues here ...
}
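The same edge-aware normal reconstruction is easy to verify outside the shader. This is a plain JavaScript sketch of getNormalXEdge's core idea (the vec3 helper functions are made up for this illustration, not Cesium API): choose the shorter horizontal and vertical difference vectors, then cross them.

```javascript
// Minimal vec3 helpers (hypothetical, defined only for this sketch).
function sub(a, b) { return [a[0] - b[0], a[1] - b[1], a[2] - b[2]]; }
function len(v) { return Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]); }
function cross(a, b) {
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]];
}
function normalize(v) { var l = len(v); return [v[0] / l, v[1] / l, v[2] / l]; }

// c is the eye-space position of the center pixel; up/down/left/right are
// the eye-space positions of its four neighbors.
function reconstructNormal(c, up, down, left, right) {
    var vUp = sub(c, up);
    var vDown = sub(down, c);
    var vLeft = sub(c, left);
    var vRight = sub(right, c);
    // The shorter difference is less likely to span a depth discontinuity.
    var DX = len(vLeft) < len(vRight) ? vLeft : vRight;
    var DY = len(vUp) < len(vDown) ? vUp : vDown;
    return normalize(cross(DY, DX));
}

// A flat patch in the z = -5 plane, facing the camera:
var n = reconstructNormal([0, 0, -5], [0, 1, -5], [0, -1, -5], [-1, 0, -5], [1, 0, -5]);
console.log(n); // [0, 0, 1]
```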

   2: Computing the occlusion of the patch by the surrounding geometry

  The previous step produced the patch normal in eye space; the next step computes how much the surrounding geometry occludes the patch. The process breaks down as follows:

  (1) Choose the region that participates in the occlusion computation. The region is a circle in screen space, centered on the current pixel with a configured radius, but not every pixel inside the circle is used, since that would be far too expensive. Instead, samples are taken along four directions, and a random perturbation is added to the direction angle to avoid conspicuously regular shadow patterns.

  (2) Along each chosen direction, step outward by the configured sample step size. Each new pixel reached is unprojected into eye space as stepPosInCamera.

  (3) Subtracting posInCamera from stepPosInCamera gives the vector diffVec, whose length len is the distance between the sample and the center point. The parameter lengthCap limits the sampling distance in eye space: samples beyond it are discarded.

  (4) The dot product of the normal with the normalized diffVec gives dotVal, which measures the angle between the two vectors: the larger the value, the smaller the angle to the normal and the stronger the occlusion.

  (5) len determines a distance falloff weight, and dotVal times that weight gives the sample's occlusion value localAO.

  (6) Within one direction, the maximum localAO is kept as that direction's ao value; the four directions' ao values are then summed and divided by 4 to give the averaged ao.

  (7) Finally, the ao value is raised to the power given by the intensity parameter to produce the final result.
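Steps (3) through (5) condense into a single per-sample formula: a sample contributes dotVal * weight, where dotVal is the clamped cosine between the normal and the direction to the sample, and weight = 1 - (len / lengthCap)^2 falls off with distance. A JavaScript sketch of that formula (the function name is made up for illustration):

```javascript
// Occlusion contribution of one sample point, mirroring steps (3)-(5).
// normal and diffVec are 3-component arrays; lengthCap and bias are scalars.
function sampleOcclusion(normal, diffVec, lengthCap, bias) {
    var len = Math.hypot(diffVec[0], diffVec[1], diffVec[2]);
    if (len > lengthCap) {
        return 0.0; // sample too far away: discarded (step 3)
    }
    // Cosine of the angle between the normal and the sample direction (step 4).
    var d = (normal[0] * diffVec[0] + normal[1] * diffVec[1] + normal[2] * diffVec[2]) / len;
    var dotVal = Math.min(Math.max(d, 0.0), 1.0);
    if (dotVal < bias) {
        dotVal = 0.0; // nearly tangent samples are ignored
    }
    // Distance falloff weight (step 5).
    var w = len / lengthCap;
    var weight = 1.0 - w * w;
    return dotVal * weight;
}

// A sample straight along the normal, at half the cap distance:
console.log(sampleOcclusion([0, 0, 1], [0, 0, 0.13], 0.26, 0.1)); // 0.75

// A sample beyond lengthCap contributes nothing:
console.log(sampleOcclusion([0, 0, 1], [0, 0, 1.0], 0.26, 0.1)); // 0
```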

  The sampling-direction selection is illustrated below:

  The corresponding GLSL code:

    // continuation of main(), after normalInCamera has been computed
    float ao = 0.0;
    vec2 sampleDirection = vec2(1.0, 0.0);
    float gapAngle = 90.0 * czm_radiansPerDegree;

    // Random noise
    float randomVal = texture2D(randomTexture, v_textureCoordinates).x;

    float inverseViewportWidth = 1.0 / czm_viewport.z;
    float inverseViewportHeight = 1.0 / czm_viewport.w;

    // Loop over each direction
    for (int i = 0; i < 4; i++)
    {
        float newGapAngle = gapAngle * (float(i) + randomVal);
        float cosVal = cos(newGapAngle);
        float sinVal = sin(newGapAngle);

        // Rotate the sampling direction
        vec2 rotatedSampleDirection = vec2(cosVal * sampleDirection.x - sinVal * sampleDirection.y, sinVal * sampleDirection.x + cosVal * sampleDirection.y);
        float localAO = 0.0;
        float localStepSize = stepSize;

        // Loop over each step
        for (int j = 0; j < 6; j++)
        {
            vec2 directionWithStep = vec2(rotatedSampleDirection.x * localStepSize * inverseViewportWidth, rotatedSampleDirection.y * localStepSize * inverseViewportHeight);
            vec2 newCoords = directionWithStep + v_textureCoordinates;

            // Stop when the sample leaves the screen
            if (newCoords.x > 1.0 || newCoords.y > 1.0 || newCoords.x < 0.0 || newCoords.y < 0.0)
            {
                break;
            }

            float stepDepthInfo = czm_readDepth(depthTexture, newCoords);
            vec4 stepPosInCamera = clipToEye(newCoords, stepDepthInfo);
            vec3 diffVec = stepPosInCamera.xyz - posInCamera.xyz;
            float len = length(diffVec);

            if (len > lengthCap)
            {
                break;
            }

            float dotVal = clamp(dot(normalInCamera, normalize(diffVec)), 0.0, 1.0);
            float weight = len / lengthCap;
            weight = 1.0 - weight * weight;

            if (dotVal < bias)
            {
                dotVal = 0.0;
            }

            localAO = max(localAO, dotVal * weight);
            localStepSize += stepSize;
        }
        ao += localAO;
    }

    ao /= 4.0;
    ao = 1.0 - clamp(ao, 0.0, 1.0);
    ao = pow(ao, intensity);
    gl_FragColor = vec4(vec3(ao), 1.0);

 

 2.2 The blur Stage

  The occlusion image from the previous step needs smoothing, which is done with a Gaussian blur: neighboring pixel values are accumulated with Gaussian weights along the horizontal direction and then the vertical direction. The algorithm is explained at https://developer.nvidia.com/gpugems/GPUGems3/gpugems3_ch40.html. The GLSL implementation:

#define SAMPLES 8

uniform float delta;
uniform float sigma;
uniform float direction; // 0.0 for x direction, 1.0 for y direction

uniform sampler2D colorTexture;

#ifdef USE_STEP_SIZE
uniform float stepSize;
#else
uniform vec2 step;
#endif

varying vec2 v_textureCoordinates;
void main()
{
    vec2 st = v_textureCoordinates;
    vec2 dir = vec2(1.0 - direction, direction);

    // Distance between neighboring taps, in texture coordinates
    // (a local name is used to avoid self-initializing the step uniform).
#ifdef USE_STEP_SIZE
    vec2 sampleStep = vec2(stepSize / czm_viewport.zw);
#else
    vec2 sampleStep = step;
#endif

    // Incremental Gaussian coefficients: g.x is the current weight,
    // g.y the ratio between successive weights, g.z the ratio of ratios.
    vec3 g;
    g.x = 1.0 / (sqrt(czm_twoPi) * sigma);
    g.y = exp((-0.5 * delta * delta) / (sigma * sigma));
    g.z = g.y * g.y;

    vec4 result = texture2D(colorTexture, st) * g.x;
    for (int i = 1; i < SAMPLES; ++i)
    {
        g.xy *= g.yz;

        vec2 offset = float(i) * dir * sampleStep;
        result += texture2D(colorTexture, st - offset) * g.x;
        result += texture2D(colorTexture, st + offset) * g.x;
    }
    gl_FragColor = result;
}
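The g.xyz trick is the "incremental Gaussian" from the GPU Gems 3 chapter linked above: instead of evaluating exp once per tap, the weight for tap i is obtained by repeated multiplication with a constant ratio, which reproduces the closed-form Gaussian exactly at multiples of delta. A standalone JavaScript sketch of the recurrence (not Cesium code):

```javascript
// Direct Gaussian: G(x) = exp(-x^2 / (2 sigma^2)) / (sqrt(2 pi) sigma)
function gaussian(x, sigma) {
    return Math.exp(-0.5 * x * x / (sigma * sigma)) / (Math.sqrt(2 * Math.PI) * sigma);
}

// Incremental version: returns the weights for taps at 0, delta, 2*delta, ...
// using only multiplications inside the loop, as the shader's g.xyz does.
function incrementalGaussianWeights(delta, sigma, samples) {
    var gx = 1.0 / (Math.sqrt(2 * Math.PI) * sigma);             // g.x: weight at offset 0
    var gy = Math.exp((-0.5 * delta * delta) / (sigma * sigma)); // g.y: weight ratio
    var gz = gy * gy;                                            // g.z: ratio of ratios
    var weights = [gx];
    for (var i = 1; i < samples; ++i) {
        gx *= gy; // corresponds to g.xy *= g.yz in the shader
        gy *= gz;
        weights.push(gx);
    }
    return weights;
}

// The incremental weights match the closed-form Gaussian:
var weights = incrementalGaussianWeights(1.0, 2.0, 8);
console.log(Math.abs(weights[5] - gaussian(5.0, 2.0)) < 1e-9); // true
```

This works because the exponents accumulate as 1 + 3 + 5 + ... = i², exactly the (i·delta)² term of the Gaussian.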

 2.3 The ambientOcclusionModulate Stage

  ambientOcclusionModulate is very simple: it darkens the original scene image according to the ao value, producing the shadowing effect. The GLSL code:

uniform sampler2D colorTexture;
uniform sampler2D ambientOcclusionTexture;
uniform bool ambientOcclusionOnly;
varying vec2 v_textureCoordinates;

void main(void)
{
    vec3 color = texture2D(colorTexture, v_textureCoordinates).rgb;
    vec3 ao = texture2D(ambientOcclusionTexture, v_textureCoordinates).rgb;
    gl_FragColor.rgb = ambientOcclusionOnly ? ao : ao * color;
}
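The modulation itself is just a per-channel multiply: with an ao value of 0.5, a pixel keeps 50% of its original intensity. The same ternary, spelled out in JavaScript for clarity (the function name is made up for illustration):

```javascript
// Per-channel modulation, mirroring the fragment shader's last line.
// color and ao are [r, g, b] arrays in the 0..1 range.
function modulate(color, ao, ambientOcclusionOnly) {
    return ambientOcclusionOnly
        ? ao.slice()                                   // show the raw AO term only
        : [ao[0] * color[0], ao[1] * color[1], ao[2] * color[2]];
}

console.log(modulate([0.8, 0.6, 0.4], [0.5, 0.5, 0.5], false)); // [0.4, 0.3, 0.2]
console.log(modulate([0.8, 0.6, 0.4], [0.5, 0.5, 0.5], true));  // [0.5, 0.5, 0.5]
```

Setting uniforms.ambientOcclusionOnly = true in the first listing is what produces the pure grayscale AO image shown in Figure 3.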

 3. Summary

  Cesium's AO implementation is a form of HBAO (Horizon-Based Ambient Occlusion). Compared with traditional SSAO, HBAO handles shadows better and makes the scene look more realistic. I've finally worked out the HBAO implementation from top to bottom. In a word: satisfying, haha!
