Android's UI widgets are ultimately drawn onto a Surface. To draw, a Surface must allocate graphics memory, render into it, and then submit that memory for display.
Allocating graphics memory
Android's graphics memory is represented in two parts: to the app it appears as a Surface (native/libs/gui/Surface.cpp), while to the graphics stack (CPU/GPU/OpenGL) it appears as a GraphicBuffer.
About Surface
Surface itself plays two roles: it backs the UI system's Canvas, and it represents the local window system, providing the native-window interface that cross-platform OpenGL (EGL) renders through.
UI is generally drawn through a Canvas; see the draw function of View, the ancestor of all UI widgets:
```java
public void draw(Canvas canvas)
```
All UI widgets inherit from View and draw themselves through a Canvas. So who triggers a widget's draw, and how is the canvas created? The answers live in ViewRootImpl. After an Activity calls setContentView, the system creates a ViewRootImpl object for it. That object manages the Activity's view hierarchy on its behalf and connects it to the window system (the Activity's window is created in this class). ViewRootImpl also establishes a connection to SurfaceFlinger and listens for its VSYNC signal; each time VSYNC fires, ViewRootImpl enters its frame callback and draws. ViewRootImpl owns the Surface object corresponding to the window:
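The VSYNC-driven redraw loop described above can be modeled with a small sketch. This is a toy stand-in for the real Choreographer mechanism, assuming a fixed vsync period; none of these class or method names come from the Android SDK:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Simplified model of ViewRootImpl's VSYNC-driven drawing: a periodic
// tick stands in for the VSYNC signal from SurfaceFlinger, and each tick
// runs a frame callback that would do the traversal and draw. This is an
// illustrative sketch, not the real Choreographer API.
public class VsyncLoop {
    private final ScheduledExecutorService vsync =
            Executors.newSingleThreadScheduledExecutor();
    private int framesDrawn = 0;

    // Stand-in for Choreographer.FrameCallback.doFrame().
    private void onVsync() {
        framesDrawn++; // here ViewRootImpl would run its traversal/draw
    }

    // Fire `frames` ticks at the given period, then report how many frames drew.
    int run(int frames, long periodMs) {
        CountDownLatch done = new CountDownLatch(frames);
        vsync.scheduleAtFixedRate(() -> {
            if (done.getCount() > 0) { // stop drawing once the target is reached
                onVsync();
                done.countDown();
            }
        }, 0, periodMs, TimeUnit.MILLISECONDS);
        try {
            done.await();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        vsync.shutdownNow();
        return framesDrawn;
    }
}
```

A 16 ms period would approximate a 60 Hz display; the model only captures the "draw once per vsync" cadence, not frame scheduling or dropped frames.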
```java
private boolean drawSoftware(Surface surface, AttachInfo attachInfo,
        int xoff, int yoff, boolean scalingRequired, Rect dirty,
        Rect surfaceInsets) {
    // Draw with software renderer.
    final Canvas canvas;
    try {
        ......
        canvas = mSurface.lockCanvas(dirty);
        ......
        if (!canvas.isOpaque() || yoff != 0 || xoff != 0) {
            canvas.drawColor(0, PorterDuff.Mode.CLEAR);
        }
        try {
            canvas.translate(-xoff, -yoff);
            ......
            mView.draw(canvas);
        } finally {
            ......
        }
    } finally {
        try {
            surface.unlockCanvasAndPost(canvas);
        } catch (IllegalArgumentException e) {
            ......
        }
    }
}
```
The Canvas usage flow is:

Surface.lockCanvas -> View.draw(Canvas) -> Surface.unlockCanvasAndPost(Canvas)
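This lock/draw/post contract can be sketched as a tiny state machine. It models only the ordering rules of the flow above, not the real android.view.Surface API:

```java
// Minimal model of the lockCanvas / draw / unlockCanvasAndPost contract:
// the buffer may only be drawn into between lock and post, and posting
// hands it off for display. Illustrative only; not the Android API.
public class CanvasLifecycle {
    enum State { FREE, LOCKED, POSTED }
    private State state = State.FREE;

    // Surface.lockCanvas: take exclusive ownership of the back buffer.
    void lockCanvas() {
        if (state != State.FREE) throw new IllegalStateException("already locked");
        state = State.LOCKED;
    }

    // View.draw(canvas) is only legal while the canvas is locked.
    void draw() {
        if (state != State.LOCKED) throw new IllegalStateException("lock first");
    }

    // Surface.unlockCanvasAndPost: submit the buffer for composition.
    void unlockCanvasAndPost() {
        if (state != State.LOCKED) throw new IllegalStateException("not locked");
        state = State.POSTED;
    }

    State state() { return state; }
}
```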
Surface.lockCanvas calls into the native object (android_view_Surface.cpp): Surface::dequeueBuffer -> BufferQueueProducer::dequeueBuffer, which returns a struct ANativeWindowBuffer object (in fact a GraphicBuffer) along with a fence fd.
```cpp
ANativeWindow_Buffer outBuffer;
status_t err = surface->lock(&outBuffer, dirtyRectPtr);
if (err < 0) {
    const char* const exception = (err == NO_MEMORY) ?
            OutOfResourcesException : "java/lang/IllegalArgumentException";
    jniThrowException(env, exception, NULL);
    return 0;
}

SkImageInfo info = SkImageInfo::Make(outBuffer.width, outBuffer.height,
        convertPixelFormat(outBuffer.format),
        outBuffer.format == PIXEL_FORMAT_RGBX_8888
                ? kOpaque_SkAlphaType : kPremul_SkAlphaType,
        GraphicsJNI::defaultColorSpace());

SkBitmap bitmap;
ssize_t bpr = outBuffer.stride * bytesPerPixel(outBuffer.format);
bitmap.setInfo(info, bpr);
if (outBuffer.width > 0 && outBuffer.height > 0) {
    bitmap.setPixels(outBuffer.bits);
} else {
    // be safe with an empty bitmap.
    bitmap.setPixels(NULL);
}

Canvas* nativeCanvas = GraphicsJNI::getNativeCanvas(env, canvasObj);
nativeCanvas->setBitmap(bitmap);
```
As the lockCanvas snippet above shows, an SkBitmap is constructed over the ANativeWindowBuffer and handed to the native canvas (SkiaCanvas), after which control returns to the Java side.
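Note that the snippet computes the bitmap's row size from the buffer's stride, not its width: hardware buffers are often padded so that each row starts at an aligned address. A sketch of that arithmetic (the 64-byte alignment is an illustrative assumption, not a fixed Android rule):

```java
// Sketch of the bpr (bytes-per-row) arithmetic from the lockCanvas JNI
// code: row size comes from the buffer's stride in pixels, and the stride
// itself is typically the width rounded up for alignment. The alignment
// value here is an assumption for illustration.
public class RowBytes {
    // bpr = outBuffer.stride * bytesPerPixel(format); 4 for RGBA_8888-style formats.
    static int bytesPerRow(int strideInPixels, int bytesPerPixel) {
        return strideInPixels * bytesPerPixel;
    }

    // A plausible stride: width rounded up so each row is alignBytes-aligned.
    static int alignedStride(int widthInPixels, int bytesPerPixel, int alignBytes) {
        int rowBytes = widthInPixels * bytesPerPixel;
        int padded = (rowBytes + alignBytes - 1) / alignBytes * alignBytes;
        return padded / bytesPerPixel;
    }
}
```

For example, a 100-pixel-wide RGBA buffer with 64-byte row alignment would carry a stride of 112 pixels, so reading pixels by width alone would misaddress every row after the first.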
Having mentioned Canvas, let's look at how Canvas calls are handled.

Most Canvas-based operations eventually land on SkCanvas, inside the Skia 2D library. To trace the flow in detail, pick TextView or any other Android UI widget and see how its draw uses the Canvas API; the Skia library sources are also worth a look.
Submitting graphics memory
After UI drawing completes, the rendered content must be submitted for display. This is done by Surface::unlockAndPost:
```cpp
status_t Surface::unlockAndPost() {
    if (mLockedBuffer == 0) {
        ALOGE("Surface::unlockAndPost failed, no locked buffer");
        return INVALID_OPERATION;
    }

    int fd = -1;
    status_t err = mLockedBuffer->unlockAsync(&fd);
    ALOGE_IF(err, "failed unlocking buffer (%p)", mLockedBuffer->handle);

    err = queueBuffer(mLockedBuffer.get(), fd);
    ALOGE_IF(err, "queueBuffer (handle=%p) failed (%s)",
             mLockedBuffer->handle, strerror(-err));

    mPostedBuffer = mLockedBuffer;
    mLockedBuffer = 0;
    return err;
}
```
Its main job is to queue the GraphicBuffer onto the BufferQueue, where it waits for SurfaceFlinger (the consumer) to display it.
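The producer/consumer handoff just described can be sketched as a toy queue. This models only the buffer circulation (dequeue/queue on the producer side, acquire/release on the consumer side); the real BufferQueue adds slots, fences, and async modes:

```java
import java.util.ArrayDeque;

// Toy model of the BufferQueue handoff: the producer (the app's Surface)
// dequeues a free buffer, fills it, and queues it; the consumer
// (SurfaceFlinger's role) acquires it for composition and then releases
// it back to the free list. Illustrative only.
public class ToyBufferQueue {
    private final ArrayDeque<int[]> free = new ArrayDeque<>();
    private final ArrayDeque<int[]> queued = new ArrayDeque<>();

    ToyBufferQueue(int slots, int bufferSize) {
        for (int i = 0; i < slots; i++) free.add(new int[bufferSize]);
    }

    int[] dequeueBuffer() { return free.remove(); }   // producer: get a free buffer
    void queueBuffer(int[] b) { queued.add(b); }      // producer: submit filled buffer
    int[] acquireBuffer() { return queued.remove(); } // consumer: take for composition
    void releaseBuffer(int[] b) { free.add(b); }      // consumer: return to free pool

    int freeCount() { return free.size(); }
    int queuedCount() { return queued.size(); }
}
```

With two or three slots this circulation is exactly double or triple buffering: the consumer can display one buffer while the producer fills another.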
How the GraphicBufferProducer is created
The basic structure of BufferQueue is as follows:

GraphicBuffers are produced through the BufferQueueProducer. Surface.cpp holds a member sp<IGraphicBufferProducer> mGraphicBufferProducer, and all GraphicBuffer operations (queue/dequeue/cancel and so on) go through it. Let's see how this object is created, who serves it on the server side, and how the client-server connection is established.
Recall ViewRootImpl from the Surface discussion above: its Surface provides the canvas for all View drawing, so understanding how that Surface is created tells us how Surface.mGraphicBufferProducer is instantiated. ViewRootImpl manages the Activity's views and creates the Activity's window; the process involves a few steps:
Create window: create the window object for the Activity, communicating with WMS to create the window:
```java
try {
    mOrigWindowType = mWindowAttributes.type;
    mAttachInfo.mRecomputeGlobalAttributes = true;
    collectViewAttributes();
    res = mWindowSession.addToDisplay(mWindow, mSeq, mWindowAttributes,
            getHostVisibility(), mDisplay.getDisplayId(), mWinFrame,
            mAttachInfo.mContentInsets, mAttachInfo.mStableInsets,
            mAttachInfo.mOutsets, mAttachInfo.mDisplayCutout, mInputChannel);
} catch (RemoteException e) {
    mAdded = false;
    ......
```
Relayout window: measure the window's size, position, and so on:
```java
int relayoutResult = mWindowSession.relayout(mWindow, mSeq, params,
        (int) (mView.getMeasuredWidth() * appScale + 0.5f),
        (int) (mView.getMeasuredHeight() * appScale + 0.5f),
        viewVisibility,
        insetsPending ? WindowManagerGlobal.RELAYOUT_INSETS_PENDING : 0,
        frameNumber, mWinFrame, mPendingOverscanInsets, mPendingContentInsets,
        mPendingVisibleInsets, mPendingStableInsets, mPendingOutsets,
        mPendingBackDropFrame, mPendingDisplayCutout,
        mPendingMergedConfiguration, mSurface);
```
In WMS these two calls map to addWindow and relayoutWindow respectively. relayoutWindow is where the actual surface is created; in other words, the real surface comes into being in WMS when the window is shown. The creation flow is:

Finally surface.copyFrom(SurfaceControl) yields the real surface. The SurfaceControl is created in WMS, and at creation time it asks SurfaceComposerClient to create a surface:
```cpp
static jlong nativeCreate(JNIEnv* env, jclass clazz, jobject sessionObj,
        jstring nameStr, jint w, jint h, jint format, jint flags,
        jlong parentObject, jint windowType, jint ownerUid) {
    ScopedUtfChars name(env, nameStr);
    sp<SurfaceComposerClient> client(
            android_view_SurfaceSession_getClient(env, sessionObj));
    SurfaceControl *parent = reinterpret_cast<SurfaceControl*>(parentObject);
    sp<SurfaceControl> surface;
    status_t err = client->createSurfaceChecked(String8(name.c_str()), w, h,
            format, &surface, flags, parent, windowType, ownerUid);
    if (err == NAME_NOT_FOUND) {
        jniThrowException(env, "java/lang/IllegalArgumentException", NULL);
        return 0;
    } else if (err != NO_ERROR) {
        jniThrowException(env, OutOfResourcesException, NULL);
        return 0;
    }
    surface->incStrong((void *)nativeCreate);
    return reinterpret_cast<jlong>(surface.get());
}
```
Inside createSurfaceChecked, a Surface is requested from SurfaceFlinger, and a new SurfaceControl is created around it. The result then propagates back up step by step, until the Surface in ViewRootImpl is backed by real graphics memory. But our question was who creates the GraphicBufferProducer, and the answer lies in SurfaceComposerClient::createSurfaceChecked. SurfaceComposerClient has a member mClient, the client side of SurfaceFlinger's Client, through which it communicates with SurfaceFlinger.
```cpp
status_t SurfaceComposerClient::createSurfaceChecked(...) {
    sp<SurfaceControl> sur;
    status_t err = mStatus;

    if (mStatus == NO_ERROR) {
        sp<IBinder> handle;
        sp<IBinder> parentHandle;
        sp<IGraphicBufferProducer> gbp;

        if (parent != nullptr) {
            parentHandle = parent->getHandle();
        }
        err = mClient->createSurface(name, w, h, format, flags, parentHandle,
                windowType, ownerUid, &handle, &gbp);
        ALOGE_IF(err, "SurfaceComposerClient::createSurface error %s",
                 strerror(-err));
        if (err == NO_ERROR) {
            *outSurface = new SurfaceControl(this, handle, gbp, true /* owned */);
        }
    }
    return err;
}
```
Here we can see the call to mClient->createSurface, which returns gbp (an IGraphicBufferProducer). Since mClient is the client side of SurfaceFlinger's Client, the GraphicBufferProducer is actually created in the SurfaceFlinger process. The relationship between mClient and SurfaceFlinger is shown below:

Now let's step into SurfaceFlinger to see exactly how the GraphicBufferProducer is created. The path is Client::createSurface -> SurfaceFlinger::createLayer -> new BufferLayer; the actual creation happens in BufferLayer::onFirstRef:
```cpp
void BufferLayer::onFirstRef() {
    // Creates a custom BufferQueue for SurfaceFlingerConsumer to use
    sp<IGraphicBufferProducer> producer;
    sp<IGraphicBufferConsumer> consumer;
    BufferQueue::createBufferQueue(&producer, &consumer, true);
    mProducer = new MonitoredProducer(producer, mFlinger, this);
    mConsumer = new BufferLayerConsumer(consumer, mFlinger->getRenderEngine(),
                                        mTextureName, this);
    mConsumer->setConsumerUsageBits(getEffectiveUsage(0));
    mConsumer->setContentsChangedListener(this);
    mConsumer->setName(mName);

    if (mFlinger->isLayerTripleBufferingDisabled()) {
        mProducer->setMaxDequeuedBufferCount(2);
    }

    const sp<const DisplayDevice> hw(mFlinger->getDefaultDisplayDevice());
    updateTransformHint(hw);
}
```
As this creation code also shows, the GraphicBuffers a Surface submits are consumed by the BufferLayerConsumer.
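Note the setMaxDequeuedBufferCount(2) call in onFirstRef: it caps how many buffers the producer may hold at once when triple buffering is disabled. A toy model of that accounting (illustrative only; the real BufferQueueCore tracks this per slot):

```java
// Toy model of BufferQueue's max-dequeued-buffer limit: the producer may
// hold at most `maxDequeued` buffers at a time, which is the constraint
// setMaxDequeuedBufferCount(2) imposes above. Illustrative sketch only.
public class DequeueLimit {
    private final int maxDequeued;
    private int dequeued = 0;

    DequeueLimit(int maxDequeued) { this.maxDequeued = maxDequeued; }

    // Producer asks for a buffer; refused once it already holds the max
    // (the real dequeueBuffer would block or return an error here).
    boolean tryDequeue() {
        if (dequeued >= maxDequeued) return false;
        dequeued++;
        return true;
    }

    // Producer queues a filled buffer back, freeing one dequeue slot.
    void queue() {
        if (dequeued == 0) throw new IllegalStateException("nothing dequeued");
        dequeued--;
    }
}
```

With the cap at 2 the producer can be filling one buffer while a second is in flight, but cannot run a third frame ahead, trading a little pipelining headroom for lower latency and memory use.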
