1. Graphics.Blit: Copies source texture into destination render texture with a shader.
Signatures:
1. public static void Blit(Texture source, RenderTexture dest, Material mat (optional), int pass = -1 (optional));
2. public static void Blit(Texture source, RenderTexture dest, Vector2 scale, Vector2 offset);
source is the source texture; dest is the destination render texture (null means output straight to the screen); mat is the material used for rendering; pass selects which pass of the material to use (-1 means all passes).
Blit sets source as the _MainTex property on mat and writes the output to dest. If dest is null, Blit writes the output to the screen back buffer.
However, if the main camera's (Camera.main) targetTexture is not null, the output goes to Camera.main.targetTexture, not to the screen buffer!
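A minimal usage sketch (my own illustration, assuming a material mat whose shader samples _MainTex): attach to a camera so Blit runs as a post-processing step.
    using UnityEngine;

    public class BlitExample : MonoBehaviour
    {
        public Material mat;   // assumed: its shader samples _MainTex

        void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            // source becomes mat's _MainTex; the result is written into destination.
            Graphics.Blit(source, destination, mat);
        }
    }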
To extract more information from the source texture (usually the current render result), we need to use it as the source and, through a shader, pull its various data (normals, positions, depth, etc.) out into multiple render textures for later use.
However, "You can't use the Graphics.Blit since it only set one RenderTexture internally". In other words, Blit can only output to a single RT and cannot switch targets mid-draw.
This is where MRT (multiple render targets) comes in:
the usual approach is to call SetRenderTarget with an array of RenderBuffers to receive the multiple render results, set the MRT material's main texture to source, then render a full-screen quad so the results are written into the buffer array; that way we obtain many kinds of information about the source texture:
From: http://whisperlin.blog.163.com/blog/static/6052371020141287150719/
    RenderTexture oldRT = RenderTexture.active;
    Graphics.SetRenderTarget(mrtRB, mrtTex[0].depthBuffer);
    // My addition: the shader reads information from the source texture,
    // so the source must be bound before it can be sampled.
    testMRTMaterial.SetTexture("_mainTex", source);
    testMRTMaterial.SetPass(0);   // Pass 0 outputs 2 render textures.
    GL.Clear(false, true, Color.clear);
    GL.PushMatrix();
    GL.LoadOrtho();
    // When using MRT, you have to render the full screen quad manually.
    GL.Begin(GL.QUADS);
    GL.TexCoord2(0.0f, 0.0f); GL.Vertex3(0.0f, 0.0f, 0.1f);
    GL.TexCoord2(1.0f, 0.0f); GL.Vertex3(1.0f, 0.0f, 0.1f);
    GL.TexCoord2(1.0f, 1.0f); GL.Vertex3(1.0f, 1.0f, 0.1f);
    GL.TexCoord2(0.0f, 1.0f); GL.Vertex3(0.0f, 1.0f, 0.1f);
    GL.End();
    GL.PopMatrix();
    // At this point the source's information has been extracted into mrtRB; what exactly
    // gets extracted depends on what the fragment shader of testMRTMaterial's pass 0
    // returns (see the link above for how to write such a fragment shader).
    RenderTexture.active = oldRT;   // hand rendering back to the screen
    // Show the result.
    testMRTMaterial.SetTexture("_Tex0", mrtTex[0]);
    testMRTMaterial.SetTexture("_Tex1", mrtTex[1]);
    Graphics.Blit(source, destination, testMRTMaterial, 1);   // pass 1 processes the source using _Tex0/_Tex1
In short, MRT is simply a way to obtain more render information in one pass and use it for further processing.
From the Blit documentation: "Blit sets dest as the render target, sets source _MainTex property on the material, and draws a full-screen quad."
Comparing that with the MRT code above, we can essentially say SetRenderTarget + setting the MRT material's texture + rendering a full-screen quad via GL = Blit; that is, MRT just takes another route around Blit's limitation of not being able to target multiple RTs.
2. Graphics.SetRenderTarget: Sets current render target.
Signatures:
1. SetRenderTarget(RenderTexture rt, int mipLevel (optional), CubemapFace face (optional), int depthSlice (optional));
2. SetRenderTarget(RenderBuffer colorBuffer, RenderBuffer depthBuffer, int mipLevel (optional), CubemapFace face (optional), int depthSlice (optional));
3. SetRenderTarget(RenderBuffer[] colorBuffers, RenderBuffer depthBuffer);
SetRenderTarget directs the GPU's rendering output (intermediate or final textures) into whatever is passed as its parameter, somewhat like asking the GPU to hand its data over to you.
SetRenderTarget(RenderTexture rt) has the same effect as RenderTexture.active = rt: both store the render result in rt; if rt is null, output goes to the screen. If you only want one particular camera's render result, use Camera.targetTexture = rt instead.
Whether you output to an RT or to a colorBuffer should make little difference; only the receiving data structure differs. After all, a RenderBuffer is just the buffer format in which a RenderTexture stores its color and depth data:
RenderTexture.colorBuffer is of type RenderBuffer. Typically you create an RT first and then pass in its colorBuffer member as the RenderBuffer argument, as in the sketch below.
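A minimal sketch of signatures 1 and 2 (my own illustration): redirect rendering into a temporary RT, then restore the previous target.
    // Signature 1: equivalent to RenderTexture.active = rt.
    RenderTexture rt = RenderTexture.GetTemporary(Screen.width, Screen.height, 24);
    RenderTexture old = RenderTexture.active;
    Graphics.SetRenderTarget(rt);
    GL.Clear(true, true, Color.black);        // draws issued from here land in rt
    // ... issue draw calls ...
    // Signature 2: the same target expressed through its buffers.
    Graphics.SetRenderTarget(rt.colorBuffer, rt.depthBuffer);
    // ... issue draw calls ...
    RenderTexture.active = old;               // restore (null = screen)
    RenderTexture.ReleaseTemporary(rt);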
In particular, signature 3 is for MRT: one render pass outputs several sets of texture data into the buffers, which can then be used to produce multiple images or to drive various post-processing effects (see the sketch below). For reference, see
my earlier MRT notes or this post: http://blog.csdn.net/ylbs110/article/details/53457576
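A sketch of how the mrtRB array used in the MRT code of point 1 can be built (my own illustration, matching signature 3):
    RenderTexture[] mrtTex = new RenderTexture[2];
    RenderBuffer[] mrtRB = new RenderBuffer[2];
    for (int i = 0; i < 2; i++)
    {
        // Only the first RT carries a depth buffer; it is shared below.
        mrtTex[i] = new RenderTexture(Screen.width, Screen.height, i == 0 ? 24 : 0);
        mrtTex[i].Create();
        mrtRB[i] = mrtTex[i].colorBuffer;     // RenderTexture.colorBuffer is a RenderBuffer
    }
    // One render now writes into both color buffers (SV_Target0 / SV_Target1 in the shader).
    Graphics.SetRenderTarget(mrtRB, mrtTex[0].depthBuffer);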
3. Graphics.BlitMultiTap: Copies source texture into destination, for multi-tap shader.
Signatures:
void BlitMultiTap(Texture source, RenderTexture dest, Material mat, params Vector2[] offsets);
Generally used for post-processing effects such as blur. It works like Blit, except that the offsets array gives texture-coordinate offsets applied when sampling in the vertex stage, and the samples taken at the several offsets are blended. For example, to blur the source texture you could pass offsets of new Vector2(-off, -off), new Vector2(off, -off), new Vector2(-off, off), new Vector2(off, off), where off is measured in pixels (see the sketch below).
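A blur sketch along those lines (my own illustration; blurMat is an assumed material built for multi-tap sampling):
    // Inside a MonoBehaviour attached to a camera:
    void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        float off = 1.0f;   // tap offset in pixels
        // The four diagonal taps are sampled and blended into one output.
        Graphics.BlitMultiTap(source, destination, blurMat,
            new Vector2(-off, -off),
            new Vector2( off, -off),
            new Vector2(-off,  off),
            new Vector2( off,  off));
    }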
4. Graphics.ConvertTexture: converts a texture to another format or size.
Signatures:
1. bool ConvertTexture(Texture src, Texture dst);
2. bool ConvertTexture(Texture src, int srcElement, Texture dst, int dstElement);
Converts src into the destination's format and size; the destination texture must not use a compressed format. Returns true on success. I have not explored the second signature; there are few examples of it online (the element indices select individual slices, e.g. cubemap faces or array elements).
From the docs: "This function provides an efficient way to convert between textures of different formats and dimensions.
The destination texture format should be uncompressed and correspond to a supported RenderTextureFormat."
Supported combinations: "Currently supported are 2d and cubemap textures as the source, and 2d, cubemap, 2d array and cubemap array textures as the destination."
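A minimal sketch of signature 1 (my own illustration; src is an assumed source texture): the destination is created uncompressed at the desired size, and ConvertTexture converts and rescales into it.
    Texture2D dst = new Texture2D(256, 256, TextureFormat.RGBA32, false);
    bool ok = Graphics.ConvertTexture(src, dst);   // true on success
    if (!ok)
        Debug.Log("ConvertTexture failed (check formats / platform support)");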
5. Graphics.CopyTexture: Copy texture contents.
Signatures:
1. void CopyTexture(Texture src, Texture dst);
2. void CopyTexture(Texture src, int srcElement, int srcMip, Texture dst, int dstElement, int dstMip);
3. void CopyTexture(Texture src, int srcElement, int srcMip, int srcX, int srcY, int srcWidth, int srcHeight, Texture dst, int dstElement, int dstMip, int dstX, int dstY);
You can copy the whole texture, just a region, or a specific mipmap level.
Notes:
1. A copy is a plain copy; no scaling is performed.
2. The source and destination formats must be compatible, e.g. TextureFormat.ARGB32 and RenderTextureFormat.ARGB32.
3. Region copies of compressed source textures are restricted: PVRTC formats can only be copied as a whole texture or a whole mip level; for DXT and ETC formats, region copies must satisfy:
"the region size and coordinates must be a multiple of compression block size (4 pixels for DXT)".
4. Both textures may need to be marked as readable, because the docs say:
"If both source and destination textures are marked as 'readable', these functions copy it as well" (i.e. the CPU-side copy of the data is copied too).
5. For more platform support information, see CopyTextureSupport, and use SystemInfo.copyTextureSupport to check. A region-copy sketch follows.
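A sketch of signature 3 (my own illustration; src and dst are assumed compatible textures): copy a 64x64 block out of src into dst.
    // Copy a 64x64 region from (0,0) of src's mip 0 into dst's mip 0 at (16,16).
    // 64 and 16 are multiples of 4, so this also satisfies the DXT block-size rule.
    Graphics.CopyTexture(src, 0, 0, 0, 0, 64, 64,
                         dst, 0, 0, 16, 16);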
6. Graphics.DrawMesh: Draw a mesh.
Signatures:
1. void DrawMesh(Mesh mesh, Vector3 position, Quaternion rotation, Material material, int layer, Camera camera, int submeshIndex, MaterialPropertyBlock properties, Rendering.ShadowCastingMode castShadows, bool receiveShadows = true, Transform probeAnchor = null, bool useLightProbes = true);
2. void DrawMesh(Mesh mesh, Matrix4x4 matrix, Material material, int layer, Camera camera, int submeshIndex, MaterialPropertyBlock properties, Rendering.ShadowCastingMode castShadows, bool receiveShadows = true, Transform probeAnchor = null, bool useLightProbes = true);
DrawMesh needs no instantiated model and no management of large numbers of GameObjects; it draws the mesh directly. (Normally we create GameObjects in the hierarchy and Unity's rendering of those GameObjects ends up calling a draw-mesh operation underneath; calling the drawing interface ourselves is more efficient, hence "Use DrawMesh in situations where you want to draw large amount of meshes".)
mesh is the Mesh to draw; position is its location; rotation is its orientation; material is the material to render with; layer is the mesh's layer; camera selects which camera to render to (null means all cameras); submeshIndex (paired with a material index) selects which submesh to draw (each submesh maps to one material; see point 4 of this document). properties matters a great deal: DrawMesh is deferred, so setting material parameters just before the DrawMesh call does not take effect in time, and if several meshes shared one material everything would get mixed up. This parameter solves exactly that problem: many meshes can share one material while using different parameter values.
castShadows, receiveShadows and useLightProbes mean the mesh drawn by DrawMesh is affected by lighting, almost the same as an ordinary model in the scene;
probeAnchor: if light probes are used, this Transform's position is where the light probes are sampled, and it is used to "find the matching reflection probe";
matrix "combines position, rotation and other transformations".
DrawMesh must be called every frame to keep the mesh visible, usually from Update. Its main cost is the resources allocated between the call and the end of the rendered frame; I have roughly profiled it and performance looks fine. A sketch follows.
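A per-frame sketch (my own illustration), using one shared material plus a MaterialPropertyBlock so this draw gets its own color ("_Color" assumes a shader exposing that property):
    using UnityEngine;

    public class DrawMeshExample : MonoBehaviour
    {
        public Mesh mesh;
        public Material mat;
        MaterialPropertyBlock props;

        void Update()
        {
            if (props == null)
                props = new MaterialPropertyBlock();
            props.SetColor("_Color", Color.red);   // per-draw value; the shared mat is untouched
            // Must be re-submitted every frame; camera = null means all cameras draw it.
            Graphics.DrawMesh(mesh, Vector3.zero, Quaternion.identity, mat, 0, null, 0, props);
        }
    }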
7. Graphics.DrawMeshNow: Draw a mesh immediately. Currently set shader and material (use Material.SetPass) will be used.
Signatures:
void DrawMeshNow(Mesh mesh, Vector3 position, Quaternion rotation, int materialIndex);
Draws the mesh immediately, unaffected by lighting. It is not called every frame from Update; call it only from OnPostRender, which only fires when the script is attached to a Camera:
    public Mesh mesh;
    public Material mat;

    public void OnPostRender()
    {
        // Set the first shader pass of the material. This is required;
        // otherwise there is no material to render the mesh with.
        mat.SetPass(0);
        // Draw the mesh at the origin.
        Graphics.DrawMeshNow(mesh, Vector3.zero, Quaternion.identity);
    }
8. Graphics.DrawMeshInstanced: Draw the same mesh multiple times using GPU instancing.
Signatures:
void DrawMeshInstanced(Mesh mesh, int submeshIndex, Material material, Matrix4x4[] matrices, int count = matrices.Length, MaterialPropertyBlock properties = null, Rendering.ShadowCastingMode castShadows = ShadowCastingMode.On, bool receiveShadows = true, int layer = 0, Camera camera = null);
The matrices array holds the transforms of the individual instances; the other parameters are as in DrawMesh.
DrawMeshInstanced is very similar to DrawMesh: it is called once per frame and draws meshes directly without creating GameObjects. Use it when you need to draw many copies of the same mesh (same material / different material parameters / different transforms).
The material's shader must be "an instanced shader", the material must have Material.enableInstancing set to true, and the hardware must support instancing: see SystemInfo.supportsInstancing.
In addition, there are further restrictions; instancing does not apply to:
1. objects using lightmaps;
2. objects affected by different light probes / reflection probes;
3. objects using shaders with multiple passes; only the first pass can be instanced. In forward rendering, for objects affected by multiple lights only the base pass can be instanced, not the add passes.
Note that you can "only draw a maximum of 1023 instances at once", and because GPU instancing is used (presumably bypassing parts of the usual render pipeline):
"Meshes are not further culled by the view frustum or baked occluders, nor sorted for transparency or z efficiency." A sketch follows.
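A minimal sketch (my own illustration; the material is assumed to use an instanced shader with enableInstancing turned on):
    using UnityEngine;

    public class InstancedExample : MonoBehaviour
    {
        public Mesh mesh;
        public Material mat;   // instanced shader, mat.enableInstancing = true
        Matrix4x4[] matrices = new Matrix4x4[1023];   // 1023 is the per-call cap

        void Start()
        {
            for (int i = 0; i < matrices.Length; i++)
                matrices[i] = Matrix4x4.TRS(Random.insideUnitSphere * 50f,
                                            Quaternion.identity, Vector3.one);
        }

        void Update()
        {
            // One call per frame draws all 1023 instances.
            Graphics.DrawMeshInstanced(mesh, 0, mat, matrices, matrices.Length);
        }
    }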
9. MaterialPropertyBlock: A block of material values to apply.
This is not a Graphics interface, but it is so closely tied to points 6 and 8 that it is worth pulling out here: it "is used by Graphics.DrawMesh/DrawMeshInstanced and Renderer.SetPropertyBlock".
The block is passed by copy, so the most efficient usage is to define a single block and set it up each time before passing it to a call:
"the most efficient way of using it is to create one block and reuse it for all DrawMesh calls".
With DrawMesh you typically complete one setup with SetFloat and the like, then call DrawMesh once. DrawMeshInstanced is different: since there are many objects, you use SetFloatArray and similar array setters, and the instanced shader presumably has to read the per-instance values on its side (I have not studied instanced shaders, so I am unsure of the details); that way every mesh gets its own parameters (see the sketch below).
Also, changing material parameters through a block is more efficient than setting them directly on the material: http://www.jianshu.com/p/eff18c57fa42
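A sketch of the array usage with DrawMeshInstanced (my own illustration, reusing the mesh/mat/matrices names from point 8; "_Hue" is a hypothetical per-instance property the instanced shader would have to declare):
    MaterialPropertyBlock block = new MaterialPropertyBlock();
    float[] hues = new float[1023];
    for (int i = 0; i < hues.Length; i++)
        hues[i] = i / 1023f;
    block.SetFloatArray("_Hue", hues);   // one value per instance; "_Hue" is hypothetical
    Graphics.DrawMeshInstanced(mesh, 0, mat, matrices, matrices.Length, block);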
10. Graphics.DrawMeshInstancedIndirect: Draw the same mesh multiple times using GPU instancing.
Signatures:
void DrawMeshInstancedIndirect(Mesh mesh, int submeshIndex, Material material, Bounds bounds, ComputeBuffer bufferWithArgs, int argsOffset = 0, MaterialPropertyBlock properties = null, Rendering.ShadowCastingMode castShadows = ShadowCastingMode.On, bool receiveShadows = true, int layer = 0, Camera camera = null);
Similar to DrawMeshInstanced, but without the 1023-instance cap.
bufferWithArgs holds five values per draw: the index count per instance, the instance count, and three "location" values (start index location, base vertex location, start instance location). It is filled via bufferWithArgs.SetData(int[] args).
argsOffset: when reading bufferWithArgs, start at the element with this index and read five consecutive values (index count per instance, instance count, the three locations). The point of the offset is presumably to let different DrawMeshInstancedIndirect calls read different argument sets from the same buffer;
So where do the per-instance transforms come from? They are supplied through a separate ComputeBuffer:
    positionBuffer = new ComputeBuffer(instanceCount, 16);   // 16 bytes = one Vector4 per instance
    Vector4[] positions = new Vector4[instanceCount];
    for (int i = 0; i < instanceCount; i++)
    {
        float angle = Random.Range(0.0f, Mathf.PI * 2.0f);
        float distance = Random.Range(20.0f, 100.0f);
        float height = Random.Range(-2.0f, 2.0f);
        float size = Random.Range(0.05f, 0.25f);
        positions[i] = new Vector4(Mathf.Sin(angle) * distance, height, Mathf.Cos(angle) * distance, size);
    }
    positionBuffer.SetData(positions);
    material.SetBuffer("positionBuffer", positionBuffer);
    // Render
    Graphics.DrawMeshInstancedIndirect(mesh, 0, material,
        new Bounds(Vector3.zero, new Vector3(100.0f, 100.0f, 100.0f)), argsBuffer);
Note that for the shader to read the data passed in through positionBuffer, the material's shader must be written specifically for this. A sketch of filling argsBuffer follows.
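A sketch of filling argsBuffer with the five values described above (my own illustration; Mesh.GetIndexCount/GetIndexStart/GetBaseVertex exist only in newer Unity versions, so on older versions the counts would be supplied by hand):
    uint[] args = new uint[5];
    args[0] = mesh.GetIndexCount(0);     // index count per instance (submesh 0)
    args[1] = (uint)instanceCount;       // instance count
    args[2] = mesh.GetIndexStart(0);     // start index location
    args[3] = mesh.GetBaseVertex(0);     // base vertex location
    args[4] = 0;                         // start instance location
    ComputeBuffer argsBuffer = new ComputeBuffer(1, args.Length * sizeof(uint),
                                                 ComputeBufferType.IndirectArguments);
    argsBuffer.SetData(args);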
11. Graphics.DrawProcedural: Draws a fully procedural geometry on the GPU.
Signatures:
void DrawProcedural(MeshTopology topology, int vertexCount, int instanceCount = 1);
"DrawProcedural does a draw call on the GPU, without any vertex or index buffers."
The docs say Shader Model 4.5 is required, because only SM4.5 lets the shader freely read arbitrary data from ComputeBuffers.
This interface is somewhat like DrawMeshNow: it uses the currently active RT as output and the currently set material pass for rendering, but it takes no vertex data as input (the shader reads it from a ComputeBuffer instead).
"It uses currently set render target, transformation matrices and currently set shader pass."
You need to learn ComputeBuffer first to really know how to use this; for now this is just an overview.
PS:
"The amount of geometry to draw is read from a ComputeBuffer."
"Typical use case is generating arbitrary amount of data from a ComputeShader and then rendering that, without requiring a readback to the CPU" (i.e. the ComputeShader's data is consumed for rendering directly, completed quickly on the GPU side).
Support check: SystemInfo.supportsComputeShaders. A minimal sketch follows.
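A minimal sketch (my own illustration): draw one triangle whose three vertices the shader is assumed to generate itself (e.g. from the vertex ID or a ComputeBuffer); like DrawMeshNow, the current pass is used, so call it from OnPostRender on a Camera.
    using UnityEngine;

    public class ProceduralExample : MonoBehaviour
    {
        public Material mat;   // assumed: shader derives positions from SV_VertexID / a ComputeBuffer

        void OnPostRender()
        {
            mat.SetPass(0);    // the currently set pass is what DrawProcedural renders with
            Graphics.DrawProcedural(MeshTopology.Triangles, 3, 1);   // 3 vertices, 1 instance
        }
    }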
12. Graphics.DrawProceduralIndirect: like Graphics.DrawProcedural, except that the vertex count and instance count are read from a ComputeBuffer on the GPU.
13. Graphics.DrawTexture: Draw a texture in screen coordinates.
Signatures:
void DrawTexture(Rect screenRect, Texture texture, Rect sourceRect, int leftBorder, int rightBorder, int topBorder, int bottomBorder, Color color, Material mat = null, int pass = -1);
screenRect is the screen region to draw into, in pixels, with (0,0) at the top-left; sourceRect is the region of the source texture to read, in normalized coordinates, with (0,0) at the bottom-left. The border parameters work like nine-slice scaling: they mark regions of the original image that stay unstretched when the rest is stretched. color modulates the vertex colors; mat is the material used to render the texture (null uses a default one); pass = -1 uses all of mat's passes. A sketch follows.
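A sketch (my own illustration): draw all of tex into a 256x128-pixel rectangle at the screen's top-left; note the flipped origins described above.
    using UnityEngine;

    public class DrawTextureExample : MonoBehaviour
    {
        public Texture tex;

        void OnGUI()
        {
            if (Event.current.type == EventType.Repaint)
                // screenRect in pixels, (0,0) top-left; sourceRect normalized, (0,0) bottom-left.
                Graphics.DrawTexture(new Rect(0, 0, 256, 128), tex,
                                     new Rect(0, 0, 1, 1), 0, 0, 0, 0);
        }
    }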
14. Graphics.ExecuteCommandBuffer: Execute a command buffer.
Signatures:
void ExecuteCommandBuffer(Rendering.CommandBuffer buffer);
CommandBuffer itself needs studying first; it is no small topic. A first taste below.
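A first taste (my own illustration): record a clear into a buffer, then execute the recorded commands immediately.
    using UnityEngine;
    using UnityEngine.Rendering;

    public class CommandBufferExample : MonoBehaviour
    {
        public RenderTexture rt;   // assumed target

        void Update()
        {
            var cb = new CommandBuffer { name = "clear rt" };
            cb.SetRenderTarget(rt);                             // recorded, not executed yet
            cb.ClearRenderTarget(true, true, Color.green, 1f);  // clear depth + color
            Graphics.ExecuteCommandBuffer(cb);                  // runs the recorded commands now
            cb.Release();
        }
    }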