Recently I had to implement a feature: from the main camera's point of view, render the regions that are visible and invisible to another camera, roughly as in the image below:
Simply put, within the area where the main camera's view overlaps the observer camera's view, we mark what the observer camera can and cannot see. The principle is the same as a ShadowMap: depth textures, world-space coordinate transforms, and so on. Features like this are always painful; the logic is simple, but doing it in Unity3D is a hassle...
The idea: render the observer camera's depth into a RenderTexture. Then, for every fragment seen by the main camera, recover its world-space position and transform it into the observer camera's viewport. If it falls inside the observer's view, convert the viewport coordinates to UVs, sample the stored depth from the RenderTexture, convert that sample to an actual depth value, and compare: if the fragment's depth is less than or equal to the depth from the depth map, it is visible to the observer; otherwise it is not.
First, let's get the depth texture. Whatever the rendering path, just add the depth-texture flag to the camera:
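Condensed into pseudo-shader form, the per-fragment test looks roughly like this (the names match the full shader later in the post, except ReconstructWorldPos, which is a placeholder for the ray-based world-position reconstruction covered below):

// Sketch only: helper names follow the complete shader further down;
// ReconstructWorldPos is a placeholder for the frustum-ray reconstruction.
float3 wpos = ReconstructWorldPos(i);                                      // world position of this fragment
float3 viewPos = mul(_DepthCameraWorldToLocalMatrix, float4(wpos, 1)).xyz; // into observer view space
float4 projPos = mul(_DepthCameraProjectionMatrix, float4(viewPos, 1));    // observer clip space
float2 vp = ProjPosToViewPortPos(projPos);                                 // observer viewport, [0,1] range
if (vp.x < 0 || vp.x > 1 || vp.y < 0 || vp.y > 1)
    return col;                                                            // outside the observer's view
float stored = LinearDepthToDistance(tex2D(_ObserverDepthTexture, vp).r, _cameraNear, _cameraFar);
return (-viewPos.z <= stored) ? _visibleColor : _unVisibleColor;           // same compare as a ShadowMap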
_cam = GetComponent<Camera>();
_cam.depthTextureMode |= DepthTextureMode.Depth;
The depth texture should have reasonably high precision; a single-channel format is enough:
renderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);
renderTexture.hideFlags = HideFlags.DontSave;
Then just render the depth out in a post-processing pass on that camera:
private void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    if (_material && _cam)
    {
        Shader.SetGlobalFloat("_cameraNear", _cam.nearClipPlane);
        Shader.SetGlobalFloat("_cameraFar", _cam.farClipPlane);
        Graphics.Blit(source, renderTexture, _material);
    }
    Graphics.Blit(source, destination);
}
The material is just a simple shader that samples the depth:
sampler2D _CameraDepthTexture;
uniform float _cameraFar;
uniform float _cameraNear;

float DistanceToLinearDepth(float d, float near, float far)
{
    float z = (d - near) / (far - near);
    return z;
}

fixed4 frag(v2f i) : SV_Target
{
    float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
    depth = LinearEyeDepth(depth);
    depth = DistanceToLinearDepth(depth, _cameraNear, _cameraFar);
    return float4(depth, depth, depth, 1);
}
This looks odd: why return neither the raw depth-map value nor the normalized Linear01Depth(depth), but a depth I compute myself?
Blame the missing official documentation... The API does in fact have a method that hands RenderBuffers directly to the camera:
_cam.SetTargetBuffers(renderTexture.colorBuffer, renderTexture.depthBuffer);
But with no docs and no examples, who knows how to use what it renders? And how exactly is renderTexture.depthBuffer supposed to be passed to a shader as a Texture? I tried converting between them through IntPtr, and every attempt failed...
So I fell back on the safest approach: render a depth texture myself with a shader. The value obtained via SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv); should be in NDC, with a (non-linear) range of [0, 1]. If you want the actual depth while rendering from a different camera, you have to implement LinearEyeDepth(float z) yourself:
inline float LinearEyeDepth(float z)
{
    return 1.0 / (_ZBufferParams.z * z + _ZBufferParams.w);
}
But _ZBufferParams holds different values on different platforms, so reimplementing it is messy, and in my tests the results were wrong anyway...
double x, y;
// OpenGL would be this:
x = (1.0 - m_FarClip / m_NearClip) / 2.0;
y = (1.0 + m_FarClip / m_NearClip) / 2.0;
// D3D is this:
x = 1.0 - m_FarClip / m_NearClip;
y = m_FarClip / m_NearClip;
_ZBufferParams = float4(x, y, x / m_FarClip, y / m_FarClip);
PS: the depth computed from the observer camera's nearClipPlane and farClipPlane wasn't correct either; I have no idea why...
Hence my own way of computing a normalized depth:
float DistanceToLinearDepth(float d, float near, float far)
{
    float z = (d - near) / (far - near);
    return z;
}
Simple and easy to understand. This gives me my own depth map, needing only the observer camera's near/far clip plane values. As you can see, for lack of official documentation a simple feature can eat a lot of time. Seriously, I searched plenty of examples, and the code and shaders copied straight over did not work at all.
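For the record, the encode above and its inverse (the LinearDepthToDistance used later when sampling the map) are just the linear map between eye distance $d$ and the stored value $z_{01}$:

$$z_{01} = \frac{d - \mathit{near}}{\mathit{far} - \mathit{near}}, \qquad d = z_{01}\,(\mathit{far} - \mathit{near}) + \mathit{near}.$$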
Now for the tricky part: the main camera's render. The main camera needs to do these things:
1. Use the main camera's depth texture to recover each fragment's world-space position.
2. Transform that world position into the observer camera's viewport space and test whether it lies inside the observer's viewport.
3. If it is outside the observer's viewport, display it normally.
4. If it is inside, convert the viewport coordinates into the observer camera's UVs, sample the observer's rendered depth texture, convert the sample to an actual depth value, and compare depths: if the fragment's depth in the observer's viewport is less than or equal to the depth-map depth, it is a visible part; otherwise it is invisible.
The main camera also enables depth texture generation:
_cam = GetComponent<Camera>();
_cam.depthTextureMode |= DepthTextureMode.Depth;
So how do we get the world position from a fragment? The safest route is to follow Unity's official Fog shaders. Since this is a post-process, you cannot compute world positions in the vertex stage and rely on interpolation to get per-fragment positions; instead, we can reconstruct the world coordinates from the four frustum rays combined with the depth texture. The figure below shows the frustum:
The post-process vertex stage can be viewed as rendering the near-plane quad ABCD, which has only 4 vertices. If we store the four rays OA, OB, OC, OD in the vertex program, then in the fragment stage we automatically get the interpolated ray from the camera to the point on the near plane corresponding to the fragment. This works because the depth value z is a non-perspective quantity (the perspective projection matrix appears to transform z, but z is really carried over into w, i.e. it is not warped), while x and y do undergo the perspective transform; so the four ray directions interpolated by the GPU match the inverse perspective exactly (z is unchanged under vector interpolation).
Now suppose ABCD is not the near plane but the slice at distance 1 from O. Then the ray OM from O to the center M of ABCD has length 1, and the world position of the fragment corresponding to M can be computed trivially:
float3 interpolatedRay = OM; // the OM vector, in world space
float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, float2(0.5, 0.5))); // (0.5, 0.5) is the screen center
float3 wpos = _WorldSpaceCameraPos + linearDepth * interpolatedRay.xyz;
Camera position + OM direction * depth gives the answer. What about the other fragments? Just let the GPU interpolate from OA, OB, OC, OD; the OM vector is literally their interpolation. There are several ways to compute these four vectors. I tried deriving them in the vertex stage from the vertex index, but the result seemed wrong, so I compute them in C# instead (the old version of the Fog shader did it this way):
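A quick sanity check (my own note, not from the Fog shader) on why the normalize-then-scale in the code below is right: a near-plane corner vector $\mathbf{c}$ has component exactly $\mathit{near}$ along the camera forward axis $\mathbf{f}$, so

$$\mathbf{r} = \frac{\mathbf{c}}{\lVert\mathbf{c}\rVert}\cdot\frac{\lVert\mathbf{c}\rVert}{\mathit{near}} = \frac{\mathbf{c}}{\mathit{near}}, \qquad \mathbf{r}\cdot\mathbf{f} = \frac{\mathbf{c}\cdot\mathbf{f}}{\mathit{near}} = 1,$$

which means that for a fragment with eye depth $d$, the offset $d\,\mathbf{r}$ has forward component exactly $d$, and $\text{wpos} = \text{camPos} + d\,\mathbf{r}$.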
Matrix4x4 frustumCorners = Matrix4x4.identity;
// First compute the vectors to the four corners of the near clip plane
float fov = _cam.fieldOfView;
float near = _cam.nearClipPlane;
float aspect = _cam.aspect;
var cameraTransform = _cam.transform;

float halfHeight = near * Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
Vector3 toRight = cameraTransform.right * halfHeight * aspect;
Vector3 toTop = cameraTransform.up * halfHeight;

Vector3 topLeft = cameraTransform.forward * near + toTop - toRight;
float scale = topLeft.magnitude / near;
topLeft.Normalize();
topLeft *= scale;

Vector3 topRight = cameraTransform.forward * near + toRight + toTop;
topRight.Normalize();
topRight *= scale;

Vector3 bottomLeft = cameraTransform.forward * near - toTop - toRight;
bottomLeft.Normalize();
bottomLeft *= scale;

Vector3 bottomRight = cameraTransform.forward * near + toRight - toTop;
bottomRight.Normalize();
bottomRight *= scale;

// Store the four vectors in the rows of the matrix frustumCorners
frustumCorners.SetRow(0, bottomLeft);
frustumCorners.SetRow(1, bottomRight);
frustumCorners.SetRow(2, topRight);
frustumCorners.SetRow(3, topLeft);
Shader.SetGlobalMatrix("_FrustumCornersRay", frustumCorners);
This passes OA, OB, OC, OD into the shader; in the vertex stage, just pick the right vector based on which quadrant the UV falls in. Once we have the fragment's world position, we transform it into the observer camera's viewport to see whether it is inside the observer's view:
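For reference, the vertex-stage quadrant lookup, excerpted from the complete shader listing below: the quad's UVs identify which corner a vertex is, and the matching row of _FrustumCornersRay is selected:

int index = 0;
if (v.uv.x < 0.5 && v.uv.y < 0.5) { index = 0; }       // bottom-left
else if (v.uv.x > 0.5 && v.uv.y < 0.5) { index = 1; }  // bottom-right
else if (v.uv.x > 0.5 && v.uv.y > 0.5) { index = 2; }  // top-right
else { index = 3; }                                    // top-left
o.interpolatedRay = _FrustumCornersRay[index];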
Pass the observer camera's matrices and parameters into the shader:
Shader.SetGlobalFloat("_cameraNear", _cam.nearClipPlane);
Shader.SetGlobalFloat("_cameraFar", _cam.farClipPlane);
Shader.SetGlobalMatrix("_DepthCameraWorldToLocalMatrix", _cam.worldToCameraMatrix);
Shader.SetGlobalMatrix("_DepthCameraProjectionMatrix", _cam.projectionMatrix);
Transform the coordinates into the observer's viewport (wpos is the world position computed earlier):
// Convert a clip-space position to NDC, then to viewport coordinates
float2 ProjPosToViewPortPos(float4 projPos)
{
    float3 ndcPos = projPos.xyz / projPos.w;
    float2 viewPortPos = float2(0.5f * ndcPos.x + 0.5f, 0.5f * ndcPos.y + 0.5f);
    return viewPortPos;
}

float3 viewPosInDepthCamera = mul(_DepthCameraWorldToLocalMatrix, float4(wpos, 1)).xyz;
float4 projPosInDepthCamera = mul(_DepthCameraProjectionMatrix, float4(viewPosInDepthCamera, 1));
float2 viewPortPosInDepthCamera = ProjPosToViewPortPos(projPosInDepthCamera);
float depthCameraViewZ = -viewPosInDepthCamera.z;
Test whether it is inside the observer's view; if not, return the original color unchanged:
if (viewPortPosInDepthCamera.x < 0 || viewPortPosInDepthCamera.x > 1)
{
    return col;
}
if (viewPortPosInDepthCamera.y < 0 || viewPortPosInDepthCamera.y > 1)
{
    return col;
}
if (depthCameraViewZ < _cameraNear || depthCameraViewZ > _cameraFar)
{
    return col;
}
If it is in view, sample the observer camera's depth texture and compare depths:
Shader.SetGlobalTexture("_ObserverDepthTexture", getDepthTexture.renderTexture); // pass the observer's depth texture to the main camera's shader
float2 depthCameraUV = viewPortPosInDepthCamera.xy;
#if UNITY_UV_STARTS_AT_TOP
if (_MainTex_TexelSize.y < 0)
{
    depthCameraUV.y = 1 - depthCameraUV.y;
}
#endif
float observer01Depth = tex2D(_ObserverDepthTexture, depthCameraUV).r;
float observerEyeDepth = LinearDepthToDistance(observer01Depth, _cameraNear, _cameraFar);

float4 finalCol = col * 0.8;
float4 visibleColor = _visibleColor * 0.4f;
float4 unVisibleColor = _unVisibleColor * 0.4f;
// depth check
if (depthCameraViewZ <= observerEyeDepth)
{
    return finalCol + visibleColor;
}
else
{
    // bias check
    if (abs(depthCameraViewZ - observerEyeDepth) <= _FieldBias)
    {
        return finalCol + visibleColor;
    }
    else
    {
        return finalCol + unVisibleColor;
    }
}
The depth texture stores our own [0,1] depth values, so the decode here likewise uses the observer camera's nearClipPlane and farClipPlane directly. With that, the basic feature is fully working. As for the pile of if/else... let it be; I will never get used to how shading languages read.
So where do the headaches come from? They all showed up above: C# APIs with no documentation that I could not figure out; the depth-decode function that produces wrong results when implemented by hand (written exactly as an official staff member described on the forum); computing the frustum rays in the vertex stage going wrong (hard to test and hard to find references for); and macros like UNITY_UV_STARTS_AT_TOP above, where how am I supposed to know when to add them and when not to? Not to mention the GL functions on the C# side and things like GL.GetGPUProjectionMatrix().
What I would really love is for the engine to let me declare which coordinate system or convention I want to program against, so that I could just write in the style I am used to. For example, I like the screen origin (0,0) at the top-left while you like it at the bottom-left. Then nobody would need to memorize all these macros just to write a shader. I suspect most people are more used to the D3D conventions anyway.
Shader.SetGlobalStandard("D3D");    // D3D conventions
Shader.SetGlobalStandard("OPENGL"); // OpenGL conventions -- if both global and per-shader settings were offered
What I built is only the basic feature. There is no sampled estimation over the depth distances, so it shows jagged edges like hard shadows and a tearing look like a texture without anisotropic filtering. The fixes are exactly the same as for ShadowMap.
I already wrote a ShadowMap back in Unity 4.x, using VSM for soft shadows. Writing this now feels just like back then: no gain in convenience...

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class GetDepthTexture : MonoBehaviour
{
    public RenderTexture renderTexture;
    public Color visibleCol = Color.green;
    public Color unVisibleCol = Color.red;

    private Material _material;
    public Camera _cam { get; private set; }
    public LineRenderer lineRendererTemplate;

    void Start()
    {
        _cam = GetComponent<Camera>();
        _cam.depthTextureMode |= DepthTextureMode.Depth;

        renderTexture = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.RFloat);
        renderTexture.hideFlags = HideFlags.DontSave;

        _material = new Material(Shader.Find("Custom/GetDepthTexture"));
        _cam.SetTargetBuffers(renderTexture.colorBuffer, renderTexture.depthBuffer);

        if (lineRendererTemplate)
        {
            // Visualize the observer's frustum edges with LineRenderers
            var dirs = VisualFieldRenderer.GetFrustumCorners(_cam);
            for (int i = 0; i < 4; i++)
            {
                Vector3 dir = dirs.GetRow(i);
                var tagPoint = _cam.transform.position + (dir * _cam.farClipPlane);
                var newLine = GameObject.Instantiate(lineRendererTemplate.gameObject).GetComponent<LineRenderer>();
                newLine.useWorldSpace = false;
                newLine.transform.SetParent(_cam.transform, false);
                newLine.transform.localPosition = Vector3.zero;
                newLine.transform.localRotation = Quaternion.identity;
                newLine.SetPositions(new Vector3[2] { Vector3.zero, _cam.transform.worldToLocalMatrix.MultiplyPoint(tagPoint) });
            }
        }
    }

    private void OnDestroy()
    {
        if (renderTexture)
        {
            RenderTexture.ReleaseTemporary(renderTexture);
        }
        if (_cam)
        {
            _cam.targetTexture = null;
        }
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_material && _cam)
        {
            Shader.SetGlobalColor("_visibleColor", visibleCol);
            Shader.SetGlobalColor("_unVisibleColor", unVisibleCol);
            Shader.SetGlobalFloat("_cameraNear", _cam.nearClipPlane);
            Shader.SetGlobalFloat("_cameraFar", _cam.farClipPlane);
            Shader.SetGlobalMatrix("_DepthCameraWorldToLocalMatrix", _cam.worldToCameraMatrix);
            Shader.SetGlobalMatrix("_DepthCameraProjectionMatrix", _cam.projectionMatrix);
            Graphics.Blit(source, renderTexture, _material);
        }
        Graphics.Blit(source, destination);
    }
}

Shader "Custom/GetDepthTexture"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    CGINCLUDE
    #include "UnityCG.cginc"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
    };

    sampler2D _CameraDepthTexture;
    uniform float _cameraFar;
    uniform float _cameraNear;

    float DistanceToLinearDepth(float d, float near, float far)
    {
        float z = (d - near) / (far - near);
        return z;
    }

    v2f vert(appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.uv;
        return o;
    }

    fixed4 frag(v2f i) : SV_Target
    {
        float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        depth = LinearEyeDepth(depth);
        depth = DistanceToLinearDepth(depth, _cameraNear, _cameraFar);
        return float4(depth, depth, depth, 1);
    }
    ENDCG

    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        // Pass 0
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class VisualFieldRenderer : MonoBehaviour
{
    public GetDepthTexture getDepthTexture;
    [SerializeField]
    [Range(0.001f, 1f)]
    public float fieldCheckBias = 0.1f;

    private Material _material;
    private Camera _cam;

    void Start()
    {
        _cam = GetComponent<Camera>();
        _cam.depthTextureMode |= DepthTextureMode.Depth;
        _material = new Material(Shader.Find("Custom/VisualFieldRenderer"));
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_cam && getDepthTexture && getDepthTexture.renderTexture && _material)
        {
            if (getDepthTexture._cam)
            {
                // Keep the main camera's far plane well beyond the observer's
                if (getDepthTexture._cam.farClipPlane > _cam.farClipPlane - 500)
                {
                    _cam.farClipPlane = getDepthTexture._cam.farClipPlane + 500;
                }
            }
            Matrix4x4 frustumCorners = GetFrustumCorners(_cam);
            Shader.SetGlobalMatrix("_FrustumCornersRay", frustumCorners);
            Shader.SetGlobalTexture("_ObserverDepthTexture", getDepthTexture.renderTexture);
            Shader.SetGlobalFloat("_FieldBias", fieldCheckBias);
            Graphics.Blit(source, destination, _material);
        }
        else
        {
            Graphics.Blit(source, destination);
        }
    }

    public static Matrix4x4 GetFrustumCorners(Camera cam)
    {
        Matrix4x4 frustumCorners = Matrix4x4.identity;
        float fov = cam.fieldOfView;
        float near = cam.nearClipPlane;
        float aspect = cam.aspect;
        var cameraTransform = cam.transform;

        float halfHeight = near * Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
        Vector3 toRight = cameraTransform.right * halfHeight * aspect;
        Vector3 toTop = cameraTransform.up * halfHeight;

        Vector3 topLeft = cameraTransform.forward * near + toTop - toRight;
        float scale = topLeft.magnitude / near;
        topLeft.Normalize();
        topLeft *= scale;

        Vector3 topRight = cameraTransform.forward * near + toRight + toTop;
        topRight.Normalize();
        topRight *= scale;

        Vector3 bottomLeft = cameraTransform.forward * near - toTop - toRight;
        bottomLeft.Normalize();
        bottomLeft *= scale;

        Vector3 bottomRight = cameraTransform.forward * near + toRight - toTop;
        bottomRight.Normalize();
        bottomRight *= scale;

        frustumCorners.SetRow(0, bottomLeft);
        frustumCorners.SetRow(1, bottomRight);
        frustumCorners.SetRow(2, topRight);
        frustumCorners.SetRow(3, topLeft);
        return frustumCorners;
    }
}

// Upgrade NOTE: replaced '_CameraToWorld' with 'unity_CameraToWorld'
Shader "Custom/VisualFieldRenderer"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    CGINCLUDE
    #include "UnityCG.cginc"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float2 uv_depth : TEXCOORD1;
        float4 interpolatedRay : TEXCOORD2;
    };

    sampler2D _MainTex;
    half4 _MainTex_TexelSize;
    sampler2D _CameraDepthTexture;
    uniform float _FieldBias;
    uniform sampler2D _ObserverDepthTexture;
    uniform float4x4 _DepthCameraWorldToLocalMatrix;
    uniform float4x4 _DepthCameraProjectionMatrix;
    uniform float _cameraFar;
    uniform float _cameraNear;
    uniform float4x4 _FrustumCornersRay;
    float4 _unVisibleColor;
    float4 _visibleColor;

    float2 ProjPosToViewPortPos(float4 projPos)
    {
        float3 ndcPos = projPos.xyz / projPos.w;
        float2 viewPortPos = float2(0.5f * ndcPos.x + 0.5f, 0.5f * ndcPos.y + 0.5f);
        return viewPortPos;
    }

    float LinearDepthToDistance(float z, float near, float far)
    {
        float d = z * (far - near) + near;
        return d;
    }

    // vertex stage
    v2f vert(appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex); // MVP matrix, screen pos
        o.uv = v.uv;
        o.uv_depth = v.uv;
        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0)
        {
            o.uv_depth.y = 1 - o.uv_depth.y;
        }
        #endif

        // pick the frustum ray by UV quadrant
        int index = 0;
        if (v.uv.x < 0.5 && v.uv.y < 0.5)
        {
            index = 0;
        }
        else if (v.uv.x > 0.5 && v.uv.y < 0.5)
        {
            index = 1;
        }
        else if (v.uv.x > 0.5 && v.uv.y > 0.5)
        {
            index = 2;
        }
        else
        {
            index = 3;
        }
        o.interpolatedRay = _FrustumCornersRay[index];
        return o;
    }

    // fragment stage
    fixed4 frag(v2f i) : SV_Target
    {
        float4 col = tex2D(_MainTex, i.uv);
        float z = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        float depth = Linear01Depth(z);
        if (depth >= 1)
        {
            return col;
        }

        // world position from the main camera's depth
        float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth));
        float3 wpos = _WorldSpaceCameraPos + linearDepth * i.interpolatedRay.xyz;

        // transform into the depth (observer) camera
        float3 viewPosInDepthCamera = mul(_DepthCameraWorldToLocalMatrix, float4(wpos, 1)).xyz;
        float4 projPosInDepthCamera = mul(_DepthCameraProjectionMatrix, float4(viewPosInDepthCamera, 1));
        float2 viewPortPosInDepthCamera = ProjPosToViewPortPos(projPosInDepthCamera);
        float depthCameraViewZ = -viewPosInDepthCamera.z;

        if (viewPortPosInDepthCamera.x < 0 || viewPortPosInDepthCamera.x > 1)
        {
            return col;
        }
        if (viewPortPosInDepthCamera.y < 0 || viewPortPosInDepthCamera.y > 1)
        {
            return col;
        }
        if (depthCameraViewZ <= _cameraNear || depthCameraViewZ >= _cameraFar)
        {
            return col;
        }

        // sample the observer's depth texture
        float2 depthCameraUV = viewPortPosInDepthCamera.xy;
        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0)
        {
            depthCameraUV.y = 1 - depthCameraUV.y;
        }
        #endif
        float observer01Depth = tex2D(_ObserverDepthTexture, depthCameraUV).r;
        float observerEyeDepth = LinearDepthToDistance(observer01Depth, _cameraNear, _cameraFar);

        float4 finalCol = col * 0.8;
        float4 visibleColor = _visibleColor * 0.4f;
        float4 unVisibleColor = _unVisibleColor * 0.4f;
        // depth check
        if (depthCameraViewZ <= observerEyeDepth)
        {
            return finalCol + visibleColor;
        }
        else
        {
            // bias check
            if (abs(depthCameraViewZ - observerEyeDepth) <= _FieldBias)
            {
                return finalCol + visibleColor;
            }
            else
            {
                return finalCol + unVisibleColor;
            }
        }
    }
    ENDCG

    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        // Pass 0
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }
    }
}
2019.12.02: added VSM filtering
Since VSM is the easiest to implement and Chebyshev's inequality is probably the easiest to understand, let's test whether it improves the result. Some of the supporting code is at https://www.cnblogs.com/tiancaiwrk/p/11957545.html
The one-sided form of the inequality (with mean $\mu$ and variance $\sigma^2$):
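$$P(x \ge t) \;\le\; \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}, \qquad t > \mu$$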
1. In the one-sided form, for t > μ, the probability that a sample x satisfies x ≥ t is bounded by the right-hand side. What we compute is therefore a bound on the probability that the current pixel is occluded in the observer camera.
2. Store the depth as the data set, and the squared depth as intermediate data, in a two-channel texture; then one Blur pass serves as the averaging step and produces the data needed for the rest of the computation.
This works because the variance can be computed as the mean of the squares minus the square of the mean:
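$$\sigma^2 = E[x^2] - (E[x])^2$$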
3. Substitute this into (5); for t we can simply use the fragment's actual depth in the observer camera.
Full code:
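In the shader below this works out to

$$p \;\approx\; \frac{\sigma^2}{\sigma^2 + (t - \mu)^2}, \qquad \sigma^2 = E[x^2] - \mu^2,$$

with $\mu$ decoded from the R channel and $E[x^2]$ from the G channel of the blurred texture; this quantity is the ChebychevVal in the fragment shader, and the final color lerps between the visible and invisible tints by $1 - p$.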

using UnityEngine;

public static class GLHelper
{
    public static void Blit(Texture source, Material material, RenderTexture destination, int materialPass = 0)
    {
        if (material.SetPass(materialPass))
        {
            material.mainTexture = source;
            Graphics.SetRenderTarget(destination);
            GL.PushMatrix();
            GL.LoadOrtho();
            GL.Begin(GL.QUADS);
            {
                // full-screen quad, UVs matching the corners
                Vector3 coords = new Vector3(0, 0, 0);
                GL.TexCoord(coords);
                GL.Vertex(coords);
                coords = new Vector3(1, 0, 0);
                GL.TexCoord(coords);
                GL.Vertex(coords);
                coords = new Vector3(1, 1, 0);
                GL.TexCoord(coords);
                GL.Vertex(coords);
                coords = new Vector3(0, 1, 0);
                GL.TexCoord(coords);
                GL.Vertex(coords);
            }
            GL.End();
            GL.PopMatrix();
        }
    }

    public static void CopyTexture(RenderTexture from, Texture2D to)
    {
        if (from && to)
        {
            if ((SystemInfo.copyTextureSupport & UnityEngine.Rendering.CopyTextureSupport.RTToTexture) != 0)
            {
                Graphics.CopyTexture(from, to);
            }
            else
            {
                var current = RenderTexture.active;
                RenderTexture.active = from;
                to.ReadPixels(new Rect(0, 0, from.width, from.height), 0, 0);
                to.Apply();
                RenderTexture.active = current;
            }
        }
    }

    #region Blur Imp
    public static Texture2D DoBlur(Texture tex)
    {
        if (tex)
        {
            var material = new Material(Shader.Find("Custom/SimpleBlur"));
            material.mainTexture = tex;

            // vertical pass
            var tempRenderTexture1 = RenderTexture.GetTemporary(tex.width, tex.height, 0, RenderTextureFormat.ARGB32);
            material.SetVector("_offset", Vector2.up);
            GLHelper.Blit(tex, material, tempRenderTexture1, 0);

            // horizontal pass
            var tempRenderTexture2 = RenderTexture.GetTemporary(tex.width, tex.height, 0, RenderTextureFormat.ARGB32);
            material.SetVector("_offset", Vector2.right);
            GLHelper.Blit(tempRenderTexture1, material, tempRenderTexture2, 0);

            var retTexture = new Texture2D(tex.width, tex.height, TextureFormat.ARGB32, false, true);
            GLHelper.CopyTexture(tempRenderTexture2, retTexture);

            RenderTexture.ReleaseTemporary(tempRenderTexture1);
            RenderTexture.ReleaseTemporary(tempRenderTexture2);
            return retTexture;
        }
        return null;
    }

    public static bool DoBlur(Texture tex, RenderTexture renderTexture)
    {
        if (tex)
        {
            var material = new Material(Shader.Find("Custom/SimpleBlur"));
            material.mainTexture = tex;

            var tempRenderTexture = RenderTexture.GetTemporary(tex.width, tex.height, 0, RenderTextureFormat.ARGB32);
            material.SetVector("_offset", Vector2.up);
            GLHelper.Blit(tex, material, tempRenderTexture, 0);

            material.SetVector("_offset", Vector2.right);
            GLHelper.Blit(tempRenderTexture, material, renderTexture, 0);

            RenderTexture.ReleaseTemporary(tempRenderTexture);
            return true;
        }
        return false;
    }
    #endregion
}

Shader "Custom/SimpleBlur"
{
    Properties
    {
        _MainTex ("Texture", 2D) = "white" {}
    }
    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            struct appdata
            {
                float4 vertex : POSITION;
                float2 uv : TEXCOORD0;
            };

            struct v2f
            {
                float2 uv : TEXCOORD0;
                float4 vertex : SV_POSITION;
            };

            sampler2D _MainTex;
            float4 _MainTex_TexelSize;
            uniform float2 _offset;

            v2f vert (appdata v)
            {
                v2f o;
                o.vertex = UnityObjectToClipPos(v.vertex);
                o.uv = v.uv;
                #if UNITY_UV_STARTS_AT_TOP
                if (_MainTex_TexelSize.y < 0)
                {
                    o.uv.y = 1 - o.uv.y;
                }
                #endif
                return o;
            }

            fixed4 frag (v2f i) : SV_Target
            {
                float4 col = half4(0, 0, 0, 0);
                // macro, for speed: one weighted tap along _offset
                #define PIXELBLUR(weight, kernel_offset) tex2D(_MainTex, float2(i.uv.xy + _offset * _MainTex_TexelSize.xy * kernel_offset)) * weight
                col += PIXELBLUR(0.05, -4.0);
                col += PIXELBLUR(0.09, -3.0);
                col += PIXELBLUR(0.12, -2.0);
                col += PIXELBLUR(0.15, -1.0);
                col += PIXELBLUR(0.18,  0.0);
                col += PIXELBLUR(0.15, +1.0);
                col += PIXELBLUR(0.12, +2.0);
                col += PIXELBLUR(0.09, +3.0);
                col += PIXELBLUR(0.05, +4.0);
                return col;
            }
            ENDCG
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class GetDepthTexture_VSM : MonoBehaviour
{
    public RenderTexture depthRenderTexture_RG;   // R = depth, G = depth squared
    public RenderTexture depthRenderTexture_VSM;  // blurred version used for the VSM lookup
    public Color visibleCol = Color.green;
    public Color unVisibleCol = Color.red;

    private Material _material;
    public Camera _cam { get; private set; }
    public LineRenderer lineRendererTemplate;

    void Start()
    {
        _cam = GetComponent<Camera>();
        _cam.depthTextureMode |= DepthTextureMode.Depth;

        depthRenderTexture_RG = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.RGFloat, RenderTextureReadWrite.Default, 2);
        depthRenderTexture_RG.hideFlags = HideFlags.DontSave;
        depthRenderTexture_VSM = RenderTexture.GetTemporary(Screen.width, Screen.height, 0, RenderTextureFormat.RGFloat, RenderTextureReadWrite.Default, 2);
        depthRenderTexture_VSM.hideFlags = HideFlags.DontSave;

        _material = new Material(Shader.Find("Custom/GetDepthTexture_VSM"));

        if (lineRendererTemplate)
        {
            var dirs = VisualFieldRenderer.GetFrustumCorners(_cam);
            for (int i = 0; i < 4; i++)
            {
                Vector3 dir = dirs.GetRow(i);
                var tagPoint = _cam.transform.position + (dir * _cam.farClipPlane);
                var newLine = GameObject.Instantiate(lineRendererTemplate.gameObject).GetComponent<LineRenderer>();
                newLine.useWorldSpace = false;
                newLine.transform.SetParent(_cam.transform, false);
                newLine.transform.localPosition = Vector3.zero;
                newLine.transform.localRotation = Quaternion.identity;
                newLine.SetPositions(new Vector3[2] { Vector3.zero, _cam.transform.worldToLocalMatrix.MultiplyPoint(tagPoint) });
            }
        }
    }

    private void OnDestroy()
    {
        if (depthRenderTexture_RG)
        {
            RenderTexture.ReleaseTemporary(depthRenderTexture_RG);
        }
        if (depthRenderTexture_VSM)
        {
            RenderTexture.ReleaseTemporary(depthRenderTexture_VSM);
        }
        if (_cam)
        {
            _cam.targetTexture = null;
        }
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_material && _cam)
        {
            Shader.SetGlobalColor("_visibleColor", visibleCol);
            Shader.SetGlobalColor("_unVisibleColor", unVisibleCol);
            Shader.SetGlobalFloat("_cameraNear", _cam.nearClipPlane);
            Shader.SetGlobalFloat("_cameraFar", _cam.farClipPlane);
            Shader.SetGlobalMatrix("_DepthCameraWorldToLocalMatrix", _cam.worldToCameraMatrix);
            Shader.SetGlobalMatrix("_DepthCameraProjectionMatrix", _cam.projectionMatrix);
            Graphics.Blit(source, depthRenderTexture_RG, _material);
            GLHelper.DoBlur(depthRenderTexture_RG, depthRenderTexture_VSM); // the averaging step
        }
        Graphics.Blit(source, destination);
    }
}

Shader "Custom/GetDepthTexture_VSM"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    CGINCLUDE
    #include "UnityCG.cginc"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
    };

    sampler2D _CameraDepthTexture;
    uniform float _cameraFar;
    uniform float _cameraNear;

    float DistanceToLinearDepth(float d, float near, float far)
    {
        float z = (d - near) / (far - near);
        return z;
    }

    v2f vert(appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex);
        o.uv = v.uv;
        return o;
    }

    fixed4 frag(v2f i) : SV_Target
    {
        float depth = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        depth = DistanceToLinearDepth(LinearEyeDepth(depth), _cameraNear, _cameraFar);
        // R = depth, G = depth squared (for the variance estimate)
        return float4(depth, depth * depth, 0, 1);
    }
    ENDCG

    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        // Pass 0
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }
    }
}

using System.Collections;
using System.Collections.Generic;
using UnityEngine;

[RequireComponent(typeof(Camera))]
public class VisualFieldRenderer_VSM : MonoBehaviour
{
    [SerializeField]
    public GetDepthTexture_VSM getDepthTexture;
    [SerializeField]
    [Range(0.001f, 1f)]
    public float fieldCheckBias = 0.1f;

    private Material _material;
    private Camera _cam;

    void Start()
    {
        _cam = GetComponent<Camera>();
        _cam.depthTextureMode |= DepthTextureMode.Depth;
        _material = new Material(Shader.Find("Custom/VisualFieldRenderer_VSM"));
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (_cam && getDepthTexture && getDepthTexture.depthRenderTexture_VSM && _material)
        {
            if (getDepthTexture._cam)
            {
                // Keep the main camera's far plane well beyond the observer's
                if (getDepthTexture._cam.farClipPlane > _cam.farClipPlane - 500)
                {
                    _cam.farClipPlane = getDepthTexture._cam.farClipPlane + 500;
                }
            }
            Matrix4x4 frustumCorners = GetFrustumCorners(_cam);
            Shader.SetGlobalMatrix("_FrustumCornersRay", frustumCorners);
            Shader.SetGlobalTexture("_ObserverDepthTexture", getDepthTexture.depthRenderTexture_VSM);
            Shader.SetGlobalFloat("_FieldBias", fieldCheckBias);
            Graphics.Blit(source, destination, _material);
        }
        else
        {
            Graphics.Blit(source, destination);
        }
    }

    public static Matrix4x4 GetFrustumCorners(Camera cam)
    {
        Matrix4x4 frustumCorners = Matrix4x4.identity;
        float fov = cam.fieldOfView;
        float near = cam.nearClipPlane;
        float aspect = cam.aspect;
        var cameraTransform = cam.transform;

        float halfHeight = near * Mathf.Tan(fov * 0.5f * Mathf.Deg2Rad);
        Vector3 toRight = cameraTransform.right * halfHeight * aspect;
        Vector3 toTop = cameraTransform.up * halfHeight;

        Vector3 topLeft = cameraTransform.forward * near + toTop - toRight;
        float scale = topLeft.magnitude / near;
        topLeft.Normalize();
        topLeft *= scale;

        Vector3 topRight = cameraTransform.forward * near + toRight + toTop;
        topRight.Normalize();
        topRight *= scale;

        Vector3 bottomLeft = cameraTransform.forward * near - toTop - toRight;
        bottomLeft.Normalize();
        bottomLeft *= scale;

        Vector3 bottomRight = cameraTransform.forward * near + toRight - toTop;
        bottomRight.Normalize();
        bottomRight *= scale;

        frustumCorners.SetRow(0, bottomLeft);
        frustumCorners.SetRow(1, bottomRight);
        frustumCorners.SetRow(2, topRight);
        frustumCorners.SetRow(3, topLeft);
        return frustumCorners;
    }
}

// Upgrade NOTE: replaced '_CameraToWorld' with 'unity_CameraToWorld'
Shader "Custom/VisualFieldRenderer_VSM"
{
    Properties
    {
        _MainTex("Texture", 2D) = "white" {}
    }
    CGINCLUDE
    #include "UnityCG.cginc"

    struct appdata
    {
        float4 vertex : POSITION;
        float2 uv : TEXCOORD0;
    };

    struct v2f
    {
        float2 uv : TEXCOORD0;
        float4 vertex : SV_POSITION;
        float2 uv_depth : TEXCOORD1;
        float4 interpolatedRay : TEXCOORD2;
    };

    sampler2D _MainTex;
    half4 _MainTex_TexelSize;
    sampler2D _CameraDepthTexture;
    uniform float _FieldBias;
    uniform sampler2D _ObserverDepthTexture;
    uniform float4x4 _DepthCameraWorldToLocalMatrix;
    uniform float4x4 _DepthCameraProjectionMatrix;
    uniform float _cameraFar;
    uniform float _cameraNear;
    uniform float4x4 _FrustumCornersRay;
    float4 _unVisibleColor;
    float4 _visibleColor;

    float2 ProjPosToViewPortPos(float4 projPos)
    {
        float3 ndcPos = projPos.xyz / projPos.w;
        float2 viewPortPos = float2(0.5f * ndcPos.x + 0.5f, 0.5f * ndcPos.y + 0.5f);
        return viewPortPos;
    }

    float LinearDepthToDistance(float z, float near, float far)
    {
        float d = z * (far - near) + near;
        return d;
    }

    float LinearDepthToDistanceSqrt(float z, float near, float far)
    {
        float d = sqrt(z) * (far - near) + near;
        return d * d;
    }

    // vertex stage
    v2f vert(appdata v)
    {
        v2f o;
        o.vertex = UnityObjectToClipPos(v.vertex); // MVP matrix, screen pos
        o.uv = v.uv;
        o.uv_depth = v.uv;
        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0)
        {
            o.uv_depth.y = 1 - o.uv_depth.y;
        }
        #endif

        // pick the frustum ray by UV quadrant
        int index = 0;
        if (v.uv.x < 0.5 && v.uv.y < 0.5)
        {
            index = 0;
        }
        else if (v.uv.x > 0.5 && v.uv.y < 0.5)
        {
            index = 1;
        }
        else if (v.uv.x > 0.5 && v.uv.y > 0.5)
        {
            index = 2;
        }
        else
        {
            index = 3;
        }
        o.interpolatedRay = _FrustumCornersRay[index];
        return o;
    }

    // fragment stage
    fixed4 frag(v2f i) : SV_Target
    {
        float4 col = tex2D(_MainTex, i.uv);
        float z = SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv);
        float depth = Linear01Depth(z);
        if (depth >= 1)
        {
            return col;
        }

        // world position from the main camera's depth
        float linearDepth = LinearEyeDepth(SAMPLE_DEPTH_TEXTURE(_CameraDepthTexture, i.uv_depth));
        float3 wpos = _WorldSpaceCameraPos + linearDepth * i.interpolatedRay.xyz;

        // transform into the depth (observer) camera
        float3 viewPosInDepthCamera = mul(_DepthCameraWorldToLocalMatrix, float4(wpos, 1)).xyz;
        float4 projPosInDepthCamera = mul(_DepthCameraProjectionMatrix, float4(viewPosInDepthCamera, 1));
        float2 viewPortPosInDepthCamera = ProjPosToViewPortPos(projPosInDepthCamera);
        float depthCameraViewZ = -viewPosInDepthCamera.z;

        if (viewPortPosInDepthCamera.x < 0 || viewPortPosInDepthCamera.x > 1)
        {
            return col;
        }
        if (viewPortPosInDepthCamera.y < 0 || viewPortPosInDepthCamera.y > 1)
        {
            return col;
        }
        if (depthCameraViewZ <= _cameraNear || depthCameraViewZ >= _cameraFar)
        {
            return col;
        }

        // sample the observer's blurred depth / depth-squared texture
        float2 depthCameraUV = viewPortPosInDepthCamera.xy;
        #if UNITY_UV_STARTS_AT_TOP
        if (_MainTex_TexelSize.y < 0)
        {
            depthCameraUV.y = 1 - depthCameraUV.y;
        }
        #endif
        float2 depthInfo = tex2D(_ObserverDepthTexture, depthCameraUV).rg;
        float observerEyeDepth = LinearDepthToDistance(depthInfo.r, _cameraNear, _cameraFar);

        float4 finalCol = col * 0.8;
        float4 visibleColor = _visibleColor * 0.4f;
        float4 unVisibleColor = _unVisibleColor * 0.4f;
        if (depthCameraViewZ <= observerEyeDepth)
        {
            return finalCol + visibleColor;
        }

        // one-sided Chebyshev: var / (var + (t - mean)^2)
        float var = LinearDepthToDistanceSqrt(depthInfo.g, _cameraNear, _cameraFar) - (observerEyeDepth * observerEyeDepth);
        float dis = depthCameraViewZ - observerEyeDepth;
        float ChebychevVal = var / (var + (dis * dis));
        return finalCol + lerp(visibleColor, unVisibleColor, 1 - ChebychevVal);
    }
    ENDCG

    SubShader
    {
        // No culling or depth
        Cull Off ZWrite Off ZTest Always

        // Pass 0
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            ENDCG
        }
    }
}
Using the real depth as the variable t turns out not to be great for a visibility-field feature: unlike a shadow, the result should not fade out gradually at the edges but be essentially binary. Compare the results:
You can see the VSM version is blurred at the edges, and there are obvious gaps along the cube's edges. That is because when computing the average depth (the Blur), the points there are averaged against points at infinity, which inevitably yields a large mean and skews the actual computation.