In this section we'll look at URP's Lit shader. Lit is also physically based, so many of its methods and properties will feel familiar to anyone who has read the built-in pipeline's PBR code. We'll walk through it again to reinforce what we know and clear up anything that may not have fully sunk in before.
Let's start with the shader's Properties:
// Specular vs Metallic workflow
[HideInInspector] _WorkflowMode("WorkflowMode", Float) = 1.0
The workflows are still Specular and Metallic. As for the difference between them, in my view they are simply different forms of input fed through the same algorithm to produce the same kind of result. They are called workflows because the pipeline for authoring the required textures differs. That said, the form of the input does determine how many parameters you can control and how far you can push physically based customization.
First, the Metallic workflow. Its input is five textures (not every one of them is required): base map, normal map, occlusion, metallic, and emission.
Compare that with the Specular workflow, whose input is also five textures: base map, normal map, occlusion, specular map, and emission.
Comparing the two, the only input that differs is the metallic map versus the specular map. So what effect do these two forms of input actually have on the rendered result? (Anyone familiar with PBR already knows — this is really just an excuse to talk about PBR while covering URP.) It will all make sense once we've read through the Lit code.
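For reference, here is a rough mapping of those texture slots to the property names Lit uses (from memory, so treat the exact list as an approximation; names can shift slightly between URP versions):

// Metallic workflow:  _BaseMap, _BumpMap, _OcclusionMap, _MetallicGlossMap, _EmissionMap
// Specular workflow:  _BaseMap, _BumpMap, _OcclusionMap, _SpecGlossMap,     _EmissionMap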
The remaining properties are largely the same as before, except that DetailTexture, DetailNormal, and DetailMask are gone. The other properties will come up as we walk through the shader's calculations, so let's skip them for now and look at the forward pass first:
Name "ForwardLit" Tags{"LightMode" = "UniversalForward"}
This is the LightMode tag that URP's ForwardRenderer matches when deciding which shader passes to execute, as covered in the earlier section on the ForwardRenderer. Next come a few render-state commands that have been parameterized:
Blend[_SrcBlend][_DstBlend]
ZWrite[_ZWrite]
Cull[_Cull]
Blend mode, depth write, and cull mode are all driven by properties, so they become editable in the material's Inspector — a trick worth borrowing when we write custom shaders later.
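Here is a minimal sketch of how such properties could be declared in a custom shader's Properties block. Note that URP's Lit actually hides these and drives them from its ShaderGUI, so the drawers and defaults below are just one common way to expose them, not Lit's own declarations:

[Enum(UnityEngine.Rendering.BlendMode)] _SrcBlend("Src Blend", Float) = 1   // BlendMode.One
[Enum(UnityEngine.Rendering.BlendMode)] _DstBlend("Dst Blend", Float) = 0   // BlendMode.Zero
[Enum(UnityEngine.Rendering.CullMode)] _Cull("Cull Mode", Float) = 2        // CullMode.Back
[Toggle] _ZWrite("Z Write", Float) = 1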
After that come a number of preprocessor directives, which we'll all get to later. The ones to pay attention to are:
#include "LitInput.hlsl" #include "LitForwardPass.hlsl"
All of Lit's shading logic lives in these two hlsl files. Let's go straight to LitForwardPass.hlsl and, as usual, start by finding the vertex shader:
Varyings LitPassVertex(Attributes input)
{
    Varyings output = (Varyings)0;

    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_TRANSFER_INSTANCE_ID(input, output);
    UNITY_INITIALIZE_VERTEX_OUTPUT_STEREO(output);

    VertexPositionInputs vertexInput = GetVertexPositionInputs(input.positionOS.xyz);
    VertexNormalInputs normalInput = GetVertexNormalInputs(input.normalOS, input.tangentOS);
    half3 viewDirWS = GetCameraPositionWS() - vertexInput.positionWS;
    half3 vertexLight = VertexLighting(vertexInput.positionWS, normalInput.normalWS);
    half fogFactor = ComputeFogFactor(vertexInput.positionCS.z);

    output.uv = TRANSFORM_TEX(input.texcoord, _BaseMap);

#ifdef _NORMALMAP
    output.normalWS = half4(normalInput.normalWS, viewDirWS.x);
    output.tangentWS = half4(normalInput.tangentWS, viewDirWS.y);
    output.bitangentWS = half4(normalInput.bitangentWS, viewDirWS.z);
#else
    output.normalWS = NormalizeNormalPerVertex(normalInput.normalWS);
    output.viewDirWS = viewDirWS;
#endif

    OUTPUT_LIGHTMAP_UV(input.lightmapUV, unity_LightmapST, output.lightmapUV);
    OUTPUT_SH(output.normalWS.xyz, output.vertexSH);

    output.fogFactorAndVertexLight = half4(fogFactor, vertexLight);

#ifdef _ADDITIONAL_LIGHTS
    output.positionWS = vertexInput.positionWS;
#endif

#if defined(_MAIN_LIGHT_SHADOWS) && !defined(_RECEIVE_SHADOWS_OFF)
    output.shadowCoord = GetShadowCoord(vertexInput);
#endif

    output.positionCS = vertexInput.positionCS;

    return output;
}
From the code we can see that the vertex shader mainly outputs the world-space normal, view direction, tangent and position, the clip-space position, the lightmap UV, spherical harmonics, and so on. (Note that when _NORMALMAP is on, the view direction is packed into the w components of normalWS, tangentWS and bitangentWS.)
The shadow coordinate is computed by the following function:
float4 GetShadowCoord(VertexPositionInputs vertexInput)
{
#if SHADOWS_SCREEN
    return ComputeScreenPos(vertexInput.positionCS);
#else
    return TransformWorldToShadowCoord(vertexInput.positionWS);
#endif
}
We can see that with screen-space shadows it returns the screen position; otherwise it returns the coordinate in the light's projection space (also called shadow space).
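For the non-screen-space branch, TransformWorldToShadowCoord (in Shadows.hlsl) essentially picks a cascade and multiplies the world position by the main light's shadow matrix. Paraphrasing from memory — the exact names and details may differ between URP versions — it looks roughly like this:

float4 TransformWorldToShadowCoord(float3 positionWS)
{
#ifdef _MAIN_LIGHT_SHADOWS_CASCADE
    half cascadeIndex = ComputeCascadeIndex(positionWS);   // choose which cascade this position falls into
#else
    half cascadeIndex = 0;
#endif
    return mul(_MainLightWorldToShadow[cascadeIndex], float4(positionWS, 1.0));
}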
The fragment function is as follows:
half4 LitPassFragment(Varyings input) : SV_Target
{
    UNITY_SETUP_INSTANCE_ID(input);
    UNITY_SETUP_STEREO_EYE_INDEX_POST_VERTEX(input);

    SurfaceData surfaceData;
    InitializeStandardLitSurfaceData(input.uv, surfaceData);

    InputData inputData;
    InitializeInputData(input, surfaceData.normalTS, inputData);

    half4 color = UniversalFragmentPBR(inputData, surfaceData.albedo, surfaceData.metallic, surfaceData.specular, surfaceData.smoothness, surfaceData.occlusion, surfaceData.emission, surfaceData.alpha);

    color.rgb = MixFog(color.rgb, inputData.fogCoord);
    return color;
}
Here we can see the three core methods: InitializeStandardLitSurfaceData, InitializeInputData, and UniversalFragmentPBR.
inline void InitializeStandardLitSurfaceData(float2 uv, out SurfaceData outSurfaceData)
{
    half4 albedoAlpha = SampleAlbedoAlpha(uv, TEXTURE2D_ARGS(_BaseMap, sampler_BaseMap));
    outSurfaceData.alpha = Alpha(albedoAlpha.a, _BaseColor, _Cutoff);

    half4 specGloss = SampleMetallicSpecGloss(uv, albedoAlpha.a);
    outSurfaceData.albedo = albedoAlpha.rgb * _BaseColor.rgb;

#if _SPECULAR_SETUP
    outSurfaceData.metallic = 1.0h;
    outSurfaceData.specular = specGloss.rgb;
#else
    outSurfaceData.metallic = specGloss.r;
    outSurfaceData.specular = half3(0.0h, 0.0h, 0.0h);
#endif

    outSurfaceData.smoothness = specGloss.a;
    outSurfaceData.normalTS = SampleNormal(uv, TEXTURE2D_ARGS(_BumpMap, sampler_BumpMap), _BumpScale);
    outSurfaceData.occlusion = SampleOcclusion(uv);
    outSurfaceData.emission = SampleEmission(uv, _EmissionColor.rgb, TEXTURE2D_ARGS(_EmissionMap, sampler_EmissionMap));
}
From this surface-data initialization we can see that it mostly just samples textures. The Metallic workflow doesn't need any specular information, and conversely the Specular workflow doesn't need metallic. Smoothness is read from the alpha channel of the metallic or specular map. The normal sampled from the normal map is in tangent space (normalTS), and AO is sampled from the occlusion map. Let's take a look at the SampleOcclusion method:
half SampleOcclusion(float2 uv)
{
#ifdef _OCCLUSIONMAP
    // TODO: Controls things like these by exposing SHADER_QUALITY levels (low, medium, high)
#if defined(SHADER_API_GLES)
    return SAMPLE_TEXTURE2D(_OcclusionMap, sampler_OcclusionMap, uv).g;
#else
    half occ = SAMPLE_TEXTURE2D(_OcclusionMap, sampler_OcclusionMap, uv).g;
    return LerpWhiteTo(occ, _OcclusionStrength);
#endif
#else
    return 1.0;
#endif
}
We can see that AO is read from the occlusion map's G channel, while metallic sits in the metallic map's R channel — so if your project uses the Metallic workflow, you can pack AO and metallic into a single texture.
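In practice that just means assigning the same texture asset to both slots, since Lit reads a different channel from each. As a hypothetical sketch of such a packed mask (the _PackedMaskMap name and channel layout here are my own illustration, not part of Lit):

// metallic in R, occlusion in G, smoothness in A
half4 mask = SAMPLE_TEXTURE2D(_PackedMaskMap, sampler_PackedMaskMap, uv);
half metallic   = mask.r;
half occlusion  = mask.g;
half smoothness = mask.a;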
Next is the InitializeInputData method:
void InitializeInputData(Varyings input, half3 normalTS, out InputData inputData)
{
    inputData = (InputData)0;

#ifdef _ADDITIONAL_LIGHTS
    inputData.positionWS = input.positionWS;
#endif

#ifdef _NORMALMAP
    half3 viewDirWS = half3(input.normalWS.w, input.tangentWS.w, input.bitangentWS.w);
    inputData.normalWS = TransformTangentToWorld(normalTS, half3x3(input.tangentWS.xyz, input.bitangentWS.xyz, input.normalWS.xyz));
#else
    half3 viewDirWS = input.viewDirWS;
    inputData.normalWS = input.normalWS;
#endif

    inputData.normalWS = NormalizeNormalPerPixel(inputData.normalWS);
    viewDirWS = SafeNormalize(viewDirWS);
    inputData.viewDirectionWS = viewDirWS;

#if defined(_MAIN_LIGHT_SHADOWS) && !defined(_RECEIVE_SHADOWS_OFF)
    inputData.shadowCoord = input.shadowCoord;
#else
    inputData.shadowCoord = float4(0, 0, 0, 0);
#endif

    inputData.fogCoord = input.fogFactorAndVertexLight.x;
    inputData.vertexLighting = input.fogFactorAndVertexLight.yzw;
    inputData.bakedGI = SAMPLE_GI(input.lightmapUV, input.vertexSH, inputData.normalWS);
}
We can see that this method gathers the basic attributes used in the PBR calculation.
And finally the key method, UniversalFragmentPBR:
half4 UniversalFragmentPBR(InputData inputData, half3 albedo, half metallic, half3 specular, half smoothness, half occlusion, half3 emission, half alpha)
{
    BRDFData brdfData;
    InitializeBRDFData(albedo, metallic, specular, smoothness, alpha, brdfData);

    Light mainLight = GetMainLight(inputData.shadowCoord);
    MixRealtimeAndBakedGI(mainLight, inputData.normalWS, inputData.bakedGI, half4(0, 0, 0, 0));

    half3 color = GlobalIllumination(brdfData, inputData.bakedGI, occlusion, inputData.normalWS, inputData.viewDirectionWS);
    color += LightingPhysicallyBased(brdfData, mainLight, inputData.normalWS, inputData.viewDirectionWS);

#ifdef _ADDITIONAL_LIGHTS
    uint pixelLightCount = GetAdditionalLightsCount();
    for (uint lightIndex = 0u; lightIndex < pixelLightCount; ++lightIndex)
    {
        Light light = GetAdditionalLight(lightIndex, inputData.positionWS);
        color += LightingPhysicallyBased(brdfData, light, inputData.normalWS, inputData.viewDirectionWS);
    }
#endif

#ifdef _ADDITIONAL_LIGHTS_VERTEX
    color += inputData.vertexLighting * brdfData.diffuse;
#endif

    color += emission;
    return half4(color, alpha);
}
We can see that it first prepares the BRDF data. Inside the method that does so (InitializeBRDFData) we find the following code:
#ifdef _SPECULAR_SETUP
    half reflectivity = ReflectivitySpecular(specular);
    half oneMinusReflectivity = 1.0 - reflectivity;

    outBRDFData.diffuse = albedo * (half3(1.0h, 1.0h, 1.0h) - specular);
    outBRDFData.specular = specular;
#else
    half oneMinusReflectivity = OneMinusReflectivityMetallic(metallic);
    half reflectivity = 1.0 - oneMinusReflectivity;

    outBRDFData.diffuse = albedo * oneMinusReflectivity;
    outBRDFData.specular = lerp(kDieletricSpec.rgb, albedo, metallic);
#endif
These few lines show how the two workflows' different inputs are converted into the same parameters for the BRDF. In the Specular workflow, the specular map represents the colour of the specular reflection — or, put differently, the per-channel weight of specular relative to diffuse. Since PBR conserves energy, if the incoming light energy is 1, then diffuse plus specular must never exceed 1, so the diffuse term is weighted by something like 1 - specularFactor. Note that 1 - specularFactor is only an intuitive way of putting it, not the literal computation; for the concrete calculation, look at the OneMinusReflectivityMetallic method:
// We'll need oneMinusReflectivity, so
//   1-reflectivity = 1-lerp(dielectricSpec, 1, metallic) = lerp(1-dielectricSpec, 0, metallic)
// store (1-dielectricSpec) in kDieletricSpec.a, then
//   1-reflectivity = lerp(alpha, 0, metallic) = alpha + metallic*(0 - alpha) =
//                  = alpha - metallic * alpha
half oneMinusDielectricSpec = kDieletricSpec.a;
return oneMinusDielectricSpec - metallic * oneMinusDielectricSpec;
From this code we can see why a simple 1 - specularFactor won't do: when metallic is 0 (i.e. for a non-conductor), the reflectivity is not 0. In other words, metallic is not the same thing as reflectivity, even though the two are positively correlated. What we ultimately need to express the specular ratio is reflectivity, not metallic.
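Plugging in some numbers makes this concrete (URP defines kDieletricSpec as roughly (0.04, 0.04, 0.04, 1.0 - 0.04), so its alpha is 0.96):

// metallic = 0:  oneMinusReflectivity = 0.96 - 0 * 0.96 = 0.96  ->  reflectivity = 0.04 (a dielectric still reflects ~4%)
// metallic = 1:  oneMinusReflectivity = 0.96 - 1 * 0.96 = 0.0   ->  reflectivity = 1.0  (a pure metal has no diffuse left)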
The Metallic workflow works to the same end: the higher the metallic value, the stronger the specular reflection (i.e. the higher the specular reflectivity) and the weaker the diffuse.
Do note how specular is computed in the Metallic workflow: it is a lerp between the dielectric (non-metal) specular colour and the albedo. That means albedo affects not only the diffuse but also the specular reflection, whereas in the Specular workflow it does not.
Comparing the two workflows, the advantage of the Metallic workflow is that you don't have to think about how to paint a specular map — you only need to work out the material's metalness. The Specular workflow requires the artist to author a correct specular map for each material, which demands more of the art team; its advantage is that the specular map can tint the highlights, and used flexibly it makes tinted, stylized specular reflections much easier to achieve. In short, each workflow has its strengths, but the Metallic workflow is currently the mainstream choice in games.
Next, the main light is fetched and the shader decides whether the main light should take part in the lightmap calculation. This relies on the _MIXED_LIGHTING_SUBTRACTIVE keyword, which is set in ForwardLights.Setup:
CoreUtils.SetKeyword(commandBuffer, ShaderKeywordStrings.MixedLightingSubtractive, renderingData.lightData.supportsMixedLighting && this.m_MixedLightingSetup == MixedLightingSetup.Subtractive);
And in the InitializeLightConstants method we can see that when the mixed lighting mode is Subtractive, the following assignment is made:
m_MixedLightingSetup = MixedLightingSetup.Subtractive;
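On the shader side, that keyword gates MixRealtimeAndBakedGI, the call we saw in UniversalFragmentPBR above. Paraphrasing from memory (the exact signature may differ between URP versions), it does little more than hand off to SubtractDirectMainLightFromLightmap:

void MixRealtimeAndBakedGI(inout Light light, half3 normalWS, inout half3 bakedGI, half4 shadowMask)
{
#if defined(_MIXED_LIGHTING_SUBTRACTIVE) && defined(LIGHTMAP_ON)
    // replace the baked GI with a version that has the realtime main light subtracted out
    bakedGI = SubtractDirectMainLightFromLightmap(light, normalWS, bakedGI);
#endif
}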
The SubtractDirectMainLightFromLightmap method removes the realtime main light's contribution from the lightmap (in effect, the whole process is computing the realtime shadow). The subtraction goes as follows:
half shadowStrength = GetMainLightShadowStrength();
half contributionTerm = saturate(dot(mainLight.direction, normalWS));
half3 lambert = mainLight.color * contributionTerm;
half3 estimatedLightContributionMaskedByInverseOfShadow = lambert * (1.0 - mainLight.shadowAttenuation);
half3 subtractedLightmap = bakedGI - estimatedLightContributionMaskedByInverseOfShadow;

half3 realtimeShadow = max(subtractedLightmap, _SubtractiveShadowColor.xyz);
realtimeShadow = lerp(bakedGI, realtimeShadow, shadowStrength);

return min(bakedGI, realtimeShadow);
From the code we can see that the directional light's contribution is estimated with the simplest Lambert model, since a lightmap doesn't need to care about specular. The inverse of the shadow attenuation tells us how much of that contribution is blocked; it is multiplied by the Lambert colour and subtracted from the baked colour, yielding a lightmap without the directional light (i.e. only shadow and ambient), which is then blended back with the baked GI by shadow strength.
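A quick numeric trace of the code above, with made-up values, shows the intent:

// bakedGI = 0.8, lambert = 0.5, shadowAttenuation = 0 (fully shadowed), shadowStrength = 1
// estimatedLightContributionMaskedByInverseOfShadow = 0.5 * (1 - 0) = 0.5
// subtractedLightmap = 0.8 - 0.5 = 0.3          // the lightmap as it would look without the directional light
// realtimeShadow = max(0.3, _SubtractiveShadowColor), then lerp(0.8, realtimeShadow, 1), then min(0.8, ...)
// A fully lit texel (shadowAttenuation = 1) subtracts nothing, so the baked GI passes through unchanged.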
With the shadow handled, the GI is computed next:
half3 GlobalIllumination(BRDFData brdfData, half3 bakedGI, half occlusion, half3 normalWS, half3 viewDirectionWS)
{
    half3 reflectVector = reflect(-viewDirectionWS, normalWS);
    half fresnelTerm = Pow4(1.0 - saturate(dot(normalWS, viewDirectionWS)));

    half3 indirectDiffuse = bakedGI * occlusion;
    half3 indirectSpecular = GlossyEnvironmentReflection(reflectVector, brdfData.perceptualRoughness, occlusion);

    return EnvironmentBRDF(brdfData, indirectDiffuse, indirectSpecular, fresnelTerm);
}
From the data prepared inside this function we can see that GI mainly covers ambient diffuse and ambient specular. The diffuse part is derived from the lightmap and the ambient occlusion; the specular part comes from the following method:
half3 GlossyEnvironmentReflection(half3 reflectVector, half perceptualRoughness, half occlusion)
{
#if !defined(_ENVIRONMENTREFLECTIONS_OFF)
    half mip = PerceptualRoughnessToMipmapLevel(perceptualRoughness);
    half4 encodedIrradiance = SAMPLE_TEXTURECUBE_LOD(unity_SpecCube0, samplerunity_SpecCube0, reflectVector, mip);

#if !defined(UNITY_USE_NATIVE_HDR)
    half3 irradiance = DecodeHDREnvironment(encodedIrradiance, unity_SpecCube0_HDR);
#else
    half3 irradiance = encodedIrradiance.rbg;
#endif

    return irradiance * occlusion;
#endif // GLOSSY_REFLECTIONS

    return _GlossyEnvironmentColor.rgb * occlusion;
}
The cubemap is sampled along the view direction reflected about the normal to get the incoming light for the ambient specular term, and that sample is likewise multiplied by occlusion. So the AO map affects not just ambient diffuse but ambient specular as well.
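The mip level comes from PerceptualRoughnessToMipmapLevel in the core render pipeline package; as far as I remember it remaps perceptual roughness non-linearly and scales it by the reflection probe's mip count, roughly like this (treat the exact constants as an assumption):

half PerceptualRoughnessToMipmapLevel(half perceptualRoughness)
{
    // rougher surfaces sample blurrier (higher) mips of the prefiltered cubemap
    perceptualRoughness = perceptualRoughness * (1.7 - 0.7 * perceptualRoughness);
    return perceptualRoughness * UNITY_SPECCUBE_LOD_STEPS;
}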
Finally the environment BRDF is evaluated:
half3 EnvironmentBRDF(BRDFData brdfData, half3 indirectDiffuse, half3 indirectSpecular, half fresnelTerm)
{
    half3 c = indirectDiffuse * brdfData.diffuse;
    float surfaceReduction = 1.0 / (brdfData.roughness2 + 1.0);
    c += surfaceReduction * indirectSpecular * lerp(brdfData.specular, brdfData.grazingTerm, fresnelTerm);
    return c;
}
First the BRDF diffuse term (the non-metal weight times the base map colour) is multiplied by the ambient diffuse (already computed at bake time and read from the lightmap), giving the final ambient diffuse. Then the ambient specular is computed:
indirectSpecular is the incoming environment light, brdfData.specular is the specular colour, brdfData.grazingTerm is smoothness plus reflectivity (I forgot to mention this earlier), and fresnelTerm is the Fresnel term. That gives us the formula for ambient specular:
specular output = incoming environment light * lerp(specular colour, smoothness + reflectivity, Fresnel term) / (roughness² + 1)
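For reference, the grazingTerm mentioned above is set up in InitializeBRDFData; as far as I recall it is simply the saturated sum of smoothness and reflectivity:

outBRDFData.grazingTerm = saturate(smoothness + reflectivity);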
With the GI term done, next comes the method that computes direct lighting:
half3 DirectBDRF(BRDFData brdfData, half3 normalWS, half3 lightDirectionWS, half3 viewDirectionWS)
{
#ifndef _SPECULARHIGHLIGHTS_OFF
    float3 halfDir = SafeNormalize(float3(lightDirectionWS) + float3(viewDirectionWS));

    float NoH = saturate(dot(normalWS, halfDir));
    half LoH = saturate(dot(lightDirectionWS, halfDir));

    // GGX Distribution multiplied by combined approximation of Visibility and Fresnel
    // BRDFspec = (D * V * F) / 4.0
    // D = roughness^2 / ( NoH^2 * (roughness^2 - 1) + 1 )^2
    // V * F = 1.0 / ( LoH^2 * (roughness + 0.5) )
    // See "Optimizing PBR for Mobile" from Siggraph 2015 moving mobile graphics course
    // https://community.arm.com/events/1155
    // Final BRDFspec = roughness^2 / ( NoH^2 * (roughness^2 - 1) + 1 )^2 * (LoH^2 * (roughness + 0.5) * 4.0)
    // We further optimize a few light invariant terms
    // brdfData.normalizationTerm = (roughness + 0.5) * 4.0 rewritten as roughness * 4.0 + 2.0 to a fit a MAD.
    float d = NoH * NoH * brdfData.roughness2MinusOne + 1.00001f;

    half LoH2 = LoH * LoH;
    half specularTerm = brdfData.roughness2 / ((d * d) * max(0.1h, LoH2) * brdfData.normalizationTerm);

    // On platforms where half actually means something, the denominator has a risk of overflow
    // clamp below was added specifically to "fix" that, but dx compiler (we convert bytecode to metal/gles)
    // sees that specularTerm have only non-negative terms, so it skips max(0,..) in clamp (leaving only min(100,...))
#if defined (SHADER_API_MOBILE) || defined (SHADER_API_SWITCH)
    specularTerm = specularTerm - HALF_MIN;
    specularTerm = clamp(specularTerm, 0.0, 100.0); // Prevent FP16 overflow on mobiles
#endif

    half3 color = specularTerm * brdfData.specular + brdfData.diffuse;
    return color;
#else
    return brdfData.diffuse;
#endif
}
The comments spell out the formula; for now let's keep following the UniversalFragmentPBR method:
#ifdef _ADDITIONAL_LIGHTS
    uint pixelLightCount = GetAdditionalLightsCount();
    for (uint lightIndex = 0u; lightIndex < pixelLightCount; ++lightIndex)
    {
        Light light = GetAdditionalLight(lightIndex, inputData.positionWS);
        color += LightingPhysicallyBased(brdfData, light, inputData.normalWS, inputData.viewDirectionWS);
    }
#endif

#ifdef _ADDITIONAL_LIGHTS_VERTEX
    color += inputData.vertexLighting * brdfData.diffuse;
#endif

    color += emission;
After the main light, the additional per-pixel lights are accumulated; finally the per-vertex lighting and emission are added and the fragment colour is output.
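For completeness, the LightingPhysicallyBased call used for both the main light and the additional lights is a thin wrapper that scales DirectBDRF by the light's radiance. Paraphrasing Lighting.hlsl from memory (treat the exact signature as an assumption):

half3 LightingPhysicallyBased(BRDFData brdfData, Light light, half3 normalWS, half3 viewDirectionWS)
{
    half NdotL = saturate(dot(normalWS, light.direction));
    // radiance = light colour scaled by distance/shadow attenuation and the Lambert term
    half3 radiance = light.color * (light.distanceAttenuation * light.shadowAttenuation * NdotL);
    return DirectBDRF(brdfData, normalWS, light.direction, viewDirectionWS) * radiance;
}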
That's a rough walkthrough of the Lit shader's forward pass. We now have a general picture of the order of the calculations and of how the inputs shape the final result, but we haven't dug into the PBR formulas involved or the optimizations behind them — we'll study those together in the next section.
Edit, 2020-4-15: corrected "baked shadow" to "shadow".
Reason: my brain glitched while writing this — what's being computed here is clearly the realtime shadow.