In the previous section we learned what SRP does and how to use it at a basic level. Now let's meet our protagonist and see what kind of pipeline URP really is!
The URP package only shows up in Unity 2019.3 and later (its predecessor was LWRP). You can create a default-pipeline project and import the package by hand, or just create a URP project directly. With it imported, let's start picking apart the URP code:
Think back to when we first learned SRP: we filled in a Render function, wrote a few simple shaders, and got the effect we wanted. Now let's see what the Render function looks like in URP:
protected override void Render(ScriptableRenderContext renderContext, Camera[] cameras)
{
    BeginFrameRendering(renderContext, cameras);

    GraphicsSettings.lightsUseLinearIntensity = (QualitySettings.activeColorSpace == ColorSpace.Linear);
    GraphicsSettings.useScriptableRenderPipelineBatching = asset.useSRPBatcher;
    SetupPerFrameShaderConstants();

    SortCameras(cameras);
    foreach (Camera camera in cameras)
    {
        BeginCameraRendering(renderContext, camera);
#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
        // It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
        VFX.VFXManager.PrepareCamera(camera);
#endif
        RenderSingleCamera(renderContext, camera);
        EndCameraRendering(renderContext, camera);
    }

    EndFrameRendering(renderContext, cameras);
}
My first reaction to this code was utter confusion. So much is wrapped in helper methods, and for some the source isn't even visible, so at first I had no idea what certain calls were doing. Nothing for it but to push on: dissect what we can see and try to infer what the hidden methods are quietly doing.
First up is the BeginFrameRendering method.
That's the official description of this method, and it says essentially nothing. We also saw while writing our own SRP that rendering works fine without it. I combed through plenty of material trying to answer this and still found no conclusion, so, exasperated, I simply commented out both the Begin and End calls. Just as I suspected, no effect changed at all, as shown below (after commenting out):
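One plausible reading, based on the public RenderPipelineManager API rather than anything in that description (so treat this as my inference): BeginFrameRendering's visible job is to raise the RenderPipelineManager.beginFrameRendering event, and EndFrameRendering the matching end event, giving external scripts a hook before and after each frame. In a scene with no subscribers, removing them would change nothing, which matches what we just observed. A minimal sketch of subscribing (the class name is mine, purely illustrative):

using UnityEngine;
using UnityEngine.Rendering;

public class FrameRenderHook : MonoBehaviour
{
    void OnEnable()  { RenderPipelineManager.beginFrameRendering += OnBeginFrame; }
    void OnDisable() { RenderPipelineManager.beginFrameRendering -= OnBeginFrame; }

    // Fired from BeginFrameRendering(renderContext, cameras) in the pipeline's Render.
    static void OnBeginFrame(ScriptableRenderContext context, Camera[] cameras)
    {
        Debug.Log($"Frame starting with {cameras.Length} camera(s)");
    }
}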
So let's shelve any deeper digging into this method for now and keep going:
GraphicsSettings.lightsUseLinearIntensity = (QualitySettings.activeColorSpace == ColorSpace.Linear);
GraphicsSettings.useScriptableRenderPipelineBatching = asset.useSRPBatcher;
Next come two graphics settings: the first ties light intensity handling to whether we're in linear color space; the second toggles the SRP Batcher (we'll come back to that later).
PBR work almost always uses linear space, so I won't go into the details here.
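For the curious, here's what that first flag means in practice (my own illustration; the formulas are my reconstruction of the documented behavior, and the helper class is hypothetical). With lightsUseLinearIntensity enabled, a light's intensity multiplies its linear color; with it disabled, Unity keeps the legacy behavior of multiplying the gamma color first:

using UnityEngine;
using UnityEngine.Rendering;

public static class LightIntensityDemo
{
    // How a light's final linear-space color is derived, depending on the flag.
    public static Color FinalLinearColor(Light light)
    {
        if (GraphicsSettings.lightsUseLinearIntensity)
            return light.color.linear * light.intensity;        // linear workflow (what URP selects above)
        else
            return (light.color * light.intensity).linear;      // legacy gamma workflow
    }
}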
Next is the SetupPerFrameShaderConstants method:
static void SetupPerFrameShaderConstants()
{
    // When glossy reflections are OFF in the shader we set a constant color to use as indirect specular
    SphericalHarmonicsL2 ambientSH = RenderSettings.ambientProbe;
    Color linearGlossyEnvColor = new Color(ambientSH[0, 0], ambientSH[1, 0], ambientSH[2, 0]) * RenderSettings.reflectionIntensity;
    Color glossyEnvColor = CoreUtils.ConvertLinearToActiveColorSpace(linearGlossyEnvColor);
    Shader.SetGlobalVector(PerFrameBuffer._GlossyEnvironmentColor, glossyEnvColor);

    // Used when subtractive mode is selected
    Shader.SetGlobalVector(PerFrameBuffer._SubtractiveShadowColor, CoreUtils.ConvertSRGBToActiveColorSpace(RenderSettings.subtractiveShadowColor));
}
It mainly sets the fallback color used as indirect specular when environment reflections are off, plus the subtractive shadow color. Our earlier SRP never set these. But to prove this code really does something, I couldn't resist tampering with it: I set glossyEnvColor to white, turned the Environment Reflections keyword off, and got the following comparison:
Before commenting out SetupPerFrameShaderConstants:
After commenting it out:
The results were exactly the same. Baffled all over again!!
So I disabled the entire body of the method, and still nothing changed. At that point a hypothesis formed: once _GlossyEnvironmentColor has been set even once, the value gets cached somewhere. So I un-commented the code, changed the color to green, then commented it out again, and this time, sure enough:
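This matches how global shader properties behave: a value written with Shader.SetGlobalVector sticks until something overwrites it; it is not reset every frame. A tiny standalone check (the script is mine; any material whose shader reads _GlossyEnvironmentColor will show the effect):

using UnityEngine;

public class GlobalColorProbe : MonoBehaviour   // illustrative script, not part of URP
{
    void Start()
    {
        // Written exactly once; no per-frame refresh.
        Shader.SetGlobalColor("_GlossyEnvironmentColor", Color.green);
    }

    void Update()
    {
        // The cached value persists frame after frame until overwritten.
        Debug.Log(Shader.GetGlobalColor("_GlossyEnvironmentColor"));
    }
}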
Mystery solved; onward. Next up is the SortCameras call, implemented like this:
private void SortCameras(Camera[] cameras)
{
    Array.Sort<Camera>(cameras, (Comparison<Camera>)((lhs, rhs) => (int)((double)lhs.depth - (double)rhs.depth)));
}
As you can see, cameras are sorted by their depth value, which decides which camera renders first and which renders later. (One subtlety: the comparison casts the depth difference to int, so two cameras whose depths differ by less than 1, say 0 and 0.5, truncate to 0 and compare as equal.) Not a big deal, onward!!
foreach (Camera camera in cameras)
{
    BeginCameraRendering(renderContext, camera);
#if VISUAL_EFFECT_GRAPH_0_0_1_OR_NEWER
    // It should be called before culling to prepare material. When there isn't any VisualEffect component, this method has no effect.
    VFX.VFXManager.PrepareCamera(camera);
#endif
    RenderSingleCamera(renderContext, camera);
    EndCameraRendering(renderContext, camera);
}
Following that sorted order, cameras are rendered one by one. There are three main calls in the loop; I'll skip over the VISUAL_EFFECT part for now (insert smug grin) and devote a proper write-up to it later.
First, BeginCameraRendering and EndCameraRendering. Well, we can't see their source either. The official docs introduce them like this:
Doesn't this description look familiar? It's practically a Copy+Paste of the previous one. Still, one keyword caught my eye: planar reflections.
So my guess was that these perform some off-screen-rendering setup. To pin down their actual effect, back to the old trick: comment them out!
And yet rendering to texture worked both before and after commenting them out, as shown:
Since they don't affect rendering, let's keep moving; we'll investigate once they actually make a difference!
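As with the frame-level pair above, my inference (again from the public RenderPipelineManager API, not from the docs excerpt) is that these calls raise the begin/endCameraRendering events. That "planar reflections" keyword fits: a planar-reflection script would typically subscribe so it can render its mirrored camera right before the main one, and with no subscribers in the scene, commenting the calls out changes nothing. A sketch (the subscriber class is hypothetical):

using UnityEngine;
using UnityEngine.Rendering;

public class PlanarReflectionHook : MonoBehaviour
{
    void OnEnable()  { RenderPipelineManager.beginCameraRendering += OnBeginCamera; }
    void OnDisable() { RenderPipelineManager.beginCameraRendering -= OnBeginCamera; }

    // Fired from BeginCameraRendering(renderContext, camera) for every camera.
    static void OnBeginCamera(ScriptableRenderContext context, Camera camera)
    {
        // A real planar-reflection script would render its reflection
        // camera to a RenderTexture here, before 'camera' draws.
        Debug.Log($"About to render {camera.name}");
    }
}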
And now we reach the heart of it, the RenderSingleCamera function:
public static void RenderSingleCamera(ScriptableRenderContext context, Camera camera)
{
    if (!camera.TryGetCullingParameters(IsStereoEnabled(camera), out var cullingParameters))
        return;

    var settings = asset;
    UniversalAdditionalCameraData additionalCameraData = null;
    if (camera.cameraType == CameraType.Game || camera.cameraType == CameraType.VR)
        camera.gameObject.TryGetComponent(out additionalCameraData);

    InitializeCameraData(settings, camera, additionalCameraData, out var cameraData);
    SetupPerCameraShaderConstants(cameraData);

    ScriptableRenderer renderer = (additionalCameraData != null) ? additionalCameraData.scriptableRenderer : settings.scriptableRenderer;
    if (renderer == null)
    {
        Debug.LogWarning(string.Format("Trying to render {0} with an invalid renderer. Camera rendering will be skipped.", camera.name));
        return;
    }

    string tag = (asset.debugLevel >= PipelineDebugLevel.Profiling) ? camera.name : k_RenderCameraTag;
    CommandBuffer cmd = CommandBufferPool.Get(tag);
    using (new ProfilingSample(cmd, tag))
    {
        renderer.Clear();
        renderer.SetupCullingParameters(ref cullingParameters, ref cameraData);

        context.ExecuteCommandBuffer(cmd);
        cmd.Clear();

#if UNITY_EDITOR
        // Emit scene view UI
        if (cameraData.isSceneViewCamera)
            ScriptableRenderContext.EmitWorldGeometryForSceneView(camera);
#endif

        var cullResults = context.Cull(ref cullingParameters);
        InitializeRenderingData(settings, ref cameraData, ref cullResults, out var renderingData);

        renderer.Setup(context, ref renderingData);
        renderer.Execute(context, ref renderingData);
    }

    context.ExecuteCommandBuffer(cmd);
    CommandBufferPool.Release(cmd);
    context.Submit();
}
It looks like a lot, but at last we see familiar code: the culling we did ourselves in the previous SRP article. No rush; let's go through it piece by piece:
First, TryGetCullingParameters fetches the camera's frustum-culling parameters, and then the settings are pulled from the asset; those settings mostly live on this panel:
Er, the image won't upload, maybe it exceeds some size limit? In any case, just select the UniversalRP asset file and its Inspector shows everything.
In URP every camera carries a UniversalAdditionalCameraData component. It backs the data shown in the overridden Camera inspector; think of it as the extra parameters that used to sit directly on the Camera component.
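If you want to poke at that component from script, it can be fetched like any other (a quick sketch of mine, using a few of its properties that also appear in the code below):

using UnityEngine;
using UnityEngine.Rendering.Universal;

public class CameraDataPeek : MonoBehaviour   // illustrative script
{
    void Start()
    {
        // The same component URP's pipeline looks up via TryGetComponent.
        if (TryGetComponent(out UniversalAdditionalCameraData data))
        {
            data.renderPostProcessing = true;   // the "Post Processing" toggle in the inspector
            data.renderShadows = true;          // the "Render Shadows" toggle
            Debug.Log($"requiresDepthTexture: {data.requiresDepthTexture}");
        }
    }
}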
InitializeCameraData takes the camera parameters and fills in a CameraData struct. Let's look at its implementation:
static void InitializeCameraData(UniversalRenderPipelineAsset settings, Camera camera, UniversalAdditionalCameraData additionalCameraData, out CameraData cameraData)
{
    const float kRenderScaleThreshold = 0.05f;

    cameraData = new CameraData();
    cameraData.camera = camera;
    cameraData.isStereoEnabled = IsStereoEnabled(camera);

    int msaaSamples = 1;
    if (camera.allowMSAA && settings.msaaSampleCount > 1)
        msaaSamples = (camera.targetTexture != null) ? camera.targetTexture.antiAliasing : settings.msaaSampleCount;

    cameraData.isSceneViewCamera = camera.cameraType == CameraType.SceneView;
    cameraData.isHdrEnabled = camera.allowHDR && settings.supportsHDR;

    Rect cameraRect = camera.rect;
    cameraData.isDefaultViewport = (!(Math.Abs(cameraRect.x) > 0.0f || Math.Abs(cameraRect.y) > 0.0f ||
        Math.Abs(cameraRect.width) < 1.0f || Math.Abs(cameraRect.height) < 1.0f));

    // If XR is enabled, use XR renderScale.
    // Discard variations lesser than kRenderScaleThreshold.
    // Scale is only enabled for gameview.
    float usedRenderScale = XRGraphics.enabled ? XRGraphics.eyeTextureResolutionScale : settings.renderScale;
    cameraData.renderScale = (Mathf.Abs(1.0f - usedRenderScale) < kRenderScaleThreshold) ? 1.0f : usedRenderScale;
    cameraData.renderScale = (camera.cameraType == CameraType.Game) ? cameraData.renderScale : 1.0f;

    bool anyShadowsEnabled = settings.supportsMainLightShadows || settings.supportsAdditionalLightShadows;
    cameraData.maxShadowDistance = Mathf.Min(settings.shadowDistance, camera.farClipPlane);
    cameraData.maxShadowDistance = (anyShadowsEnabled && cameraData.maxShadowDistance >= camera.nearClipPlane) ? cameraData.maxShadowDistance : 0.0f;

    if (additionalCameraData != null)
    {
        cameraData.maxShadowDistance = (additionalCameraData.renderShadows) ? cameraData.maxShadowDistance : 0.0f;
        cameraData.requiresDepthTexture = additionalCameraData.requiresDepthTexture;
        cameraData.requiresOpaqueTexture = additionalCameraData.requiresColorTexture;
        cameraData.volumeLayerMask = additionalCameraData.volumeLayerMask;
        cameraData.volumeTrigger = additionalCameraData.volumeTrigger == null ? camera.transform : additionalCameraData.volumeTrigger;
        cameraData.postProcessEnabled = additionalCameraData.renderPostProcessing;
        cameraData.isStopNaNEnabled = cameraData.postProcessEnabled && additionalCameraData.stopNaN && SystemInfo.graphicsShaderLevel >= 35;
        cameraData.isDitheringEnabled = cameraData.postProcessEnabled && additionalCameraData.dithering;
        cameraData.antialiasing = cameraData.postProcessEnabled ? additionalCameraData.antialiasing : AntialiasingMode.None;
        cameraData.antialiasingQuality = additionalCameraData.antialiasingQuality;
    }
    else if (camera.cameraType == CameraType.SceneView)
    {
        cameraData.requiresDepthTexture = settings.supportsCameraDepthTexture;
        cameraData.requiresOpaqueTexture = settings.supportsCameraOpaqueTexture;
        cameraData.volumeLayerMask = 1; // "Default"
        cameraData.volumeTrigger = null;
        cameraData.postProcessEnabled = CoreUtils.ArePostProcessesEnabled(camera);
        cameraData.isStopNaNEnabled = false;
        cameraData.isDitheringEnabled = false;
        cameraData.antialiasing = AntialiasingMode.None;
        cameraData.antialiasingQuality = AntialiasingQuality.High;
    }
    else
    {
        cameraData.requiresDepthTexture = settings.supportsCameraDepthTexture;
        cameraData.requiresOpaqueTexture = settings.supportsCameraOpaqueTexture;
        cameraData.volumeLayerMask = 1; // "Default"
        cameraData.volumeTrigger = null;
        cameraData.postProcessEnabled = false;
        cameraData.isStopNaNEnabled = false;
        cameraData.isDitheringEnabled = false;
        cameraData.antialiasing = AntialiasingMode.None;
        cameraData.antialiasingQuality = AntialiasingQuality.High;
    }

    // Disables post if GLes2
    cameraData.postProcessEnabled &= SystemInfo.graphicsDeviceType != GraphicsDeviceType.OpenGLES2;
    cameraData.requiresDepthTexture |= cameraData.isSceneViewCamera || cameraData.postProcessEnabled;

    var commonOpaqueFlags = SortingCriteria.CommonOpaque;
    var noFrontToBackOpaqueFlags = SortingCriteria.SortingLayer | SortingCriteria.RenderQueue | SortingCriteria.OptimizeStateChanges | SortingCriteria.CanvasOrder;
    bool hasHSRGPU = SystemInfo.hasHiddenSurfaceRemovalOnGPU;
    bool canSkipFrontToBackSorting = (camera.opaqueSortMode == OpaqueSortMode.Default && hasHSRGPU) || camera.opaqueSortMode == OpaqueSortMode.NoDistanceSort;

    cameraData.defaultOpaqueSortFlags = canSkipFrontToBackSorting ? noFrontToBackOpaqueFlags : commonOpaqueFlags;
    cameraData.captureActions = CameraCaptureBridge.GetCaptureActions(camera);

    bool needsAlphaChannel = camera.targetTexture == null && Graphics.preserveFramebufferAlpha && PlatformNeedsToKillAlpha();
    cameraData.cameraTargetDescriptor = CreateRenderTextureDescriptor(camera, cameraData.renderScale,
        cameraData.isStereoEnabled, cameraData.isHdrEnabled, msaaSamples, needsAlphaChannel);
}
Another big wall of code. No hurry; let's savor the artistry.
Okay, there isn't much artistry to savor. It's mostly plain parameter assignment, but along the way we can spot the prerequisites for certain features: for example, post-processing is forcibly disabled on OpenGL ES 2.0, and enabling it (or being the Scene view camera) forces a depth texture.
Next comes the SetupPerCameraShaderConstants method:
static void SetupPerCameraShaderConstants(CameraData cameraData)
{
    Camera camera = cameraData.camera;

    float scaledCameraWidth = (float)cameraData.camera.pixelWidth * cameraData.renderScale;
    float scaledCameraHeight = (float)cameraData.camera.pixelHeight * cameraData.renderScale;
    Shader.SetGlobalVector(PerCameraBuffer._ScaledScreenParams, new Vector4(scaledCameraWidth, scaledCameraHeight, 1.0f + 1.0f / scaledCameraWidth, 1.0f + 1.0f / scaledCameraHeight));
    Shader.SetGlobalVector(PerCameraBuffer._WorldSpaceCameraPos, camera.transform.position);

    float cameraWidth = (float)cameraData.camera.pixelWidth;
    float cameraHeight = (float)cameraData.camera.pixelHeight;
    Shader.SetGlobalVector(PerCameraBuffer._ScreenParams, new Vector4(cameraWidth, cameraHeight, 1.0f + 1.0f / cameraWidth, 1.0f + 1.0f / cameraHeight));

    Matrix4x4 projMatrix = GL.GetGPUProjectionMatrix(camera.projectionMatrix, false);
    Matrix4x4 viewMatrix = camera.worldToCameraMatrix;
    Matrix4x4 viewProjMatrix = projMatrix * viewMatrix;
    Matrix4x4 invViewProjMatrix = Matrix4x4.Inverse(viewProjMatrix);
    Shader.SetGlobalMatrix(PerCameraBuffer._InvCameraViewProj, invViewProjMatrix);
}
This mainly assigns four global shader variables: _ScaledScreenParams, _WorldSpaceCameraPos, _ScreenParams, and _InvCameraViewProj.
We'll cover exactly what they're used for when we get to URP's shaders (though the names pretty much give it away, haha).
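As a taste of the last one (my own round-trip check, not URP code): _InvCameraViewProj is the inverse of the same projMatrix * viewMatrix product built above, which is what lets a shader take a screen position plus depth back into world space. The C# equivalent of that round trip:

using UnityEngine;

public static class InvViewProjDemo   // hypothetical helper
{
    public static void RoundTrip(Camera cam)
    {
        // Same construction as SetupPerCameraShaderConstants.
        Matrix4x4 proj = GL.GetGPUProjectionMatrix(cam.projectionMatrix, false);
        Matrix4x4 viewProj = proj * cam.worldToCameraMatrix;
        Matrix4x4 invViewProj = Matrix4x4.Inverse(viewProj);

        // World -> clip -> world again via the inverse.
        Vector4 clip = viewProj * new Vector4(1f, 2f, 3f, 1f);
        Vector4 back = invViewProj * clip;
        // Here w comes back as 1; in a shader, starting from NDC, the divide by w is essential.
        Vector3 recovered = new Vector3(back.x, back.y, back.z) / back.w;
        Debug.Log(recovered);   // ≈ (1, 2, 3)
    }
}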
And now for the main event: URP wraps its whole rendering strategy in the ForwardRenderer. For reasons of length, we'll dig into ForwardRenderer in the next section!!