Android Camera2 HAL3 Study Notes

1. Overall Android Camera Architecture

  • Since Android 8.0, most devices have adopted the Camera API2 + HAL3 architecture, layered as in the figure below. (architecture diagram)

  • The overall Android Camera framework spans three processes: the app process, the camera server process, and the HAL (provider) process. All inter-process communication goes through Binder: the app and the camera server talk over AIDL (Android Interface Definition Language), while the camera server and the HAL (provider process) talk over HIDL (HAL Interface Definition Language).

1.1 Basic Layering of Android Camera

  • As the figure shows, the camera software on an Android phone has roughly four layers:

    1. Application layer: app developers only need to call the interfaces provided by AOSP (the Android Open Source Project), i.e., Android's generic camera APIs (Camera API2). These interfaces pass commands and data over Binder to the camera service (CameraService) in the framework layer;

    2. Framework layer: located at frameworks/av/services/camera/libcameraservice/CameraService.cpp. The framework camera service (CameraService) is the bridge of the stack: it interacts with applications above and with the HAL layer below.

      • frameworks/av/camera/ provides the implementations of the AIDL interfaces ICameraService, ICameraDeviceUser, ICameraDeviceCallbacks, ICameraServiceListener, etc., as well as the camera server's main function.
      • AIDL is a Binder-based mechanism that lets app-framework code call into native-framework code; the interface definitions live under frameworks/av/camera/aidl/android/hardware.
    3. HAL layer: the hardware abstraction layer. Android defines the protocol and interfaces by which the framework service talks to the HAL; how the HAL is implemented is left to each vendor, e.g., Qualcomm's legacy mm-camera and newer CamX architectures, or MediaTek's HAL3 architecture from Android P onward.

    4. Driver layer: data moves from the hardware into the driver; the driver receives commands and buffers from the HAL and delivers sensor data up to the HAL. Naturally, different sensor chips require different drivers.

  • The main reason the architecture (the Android O Treble architecture) is split into this many layers is to draw clear boundaries, simplify upgrades, and keep high cohesion with low coupling by separating OEM customizations from the framework. Reference: an analysis of the Android O Treble architecture.

    • Android has to accommodate the differing configurations and sensors of every handset maker. However the chip changes, nothing from the framework upward needs to change, and apps run unmodified on every configuration. The HAL's upward-facing interface is fixed by Android; each platform vendor only needs to implement a HAL suited to its own platform behind that interface.

1.2 Overall Android Camera Workflow

(workflow diagram)

  • The figure above depicts the rough end-to-end workflow of Android Camera: green boxes are the operations an app developer performs, blue the APIs AOSP provides, yellow the native framework service, and purple the HAL-layer service.
  1. The app, typically in its MainActivity, uses a control such as SurfaceView, SurfaceTexture + TextureView, or GLSurfaceView as the preview display. What these controls have in common is that each holds a single Surface that acts as the container for camera data.

  2. In MainActivity's onCreate, the app calls the API to tell the native framework service, CameraServer, to connect to the HAL and thereby power up the camera sensor.

  3. When openCamera succeeds, a callback travels from CameraServer to the app; in onOpened (or a similar callback) the app issues something like startPreview. This creates a CameraCaptureSession, during which a configureStream call is made to CameraServer. Its parameters include references to the Surfaces of the controls from step 1, effectively handing the Surface containers to CameraServer; CameraServer wraps each Surface into a stream and passes it over HIDL to the HAL, which then performs its own configureStream.

  4. Once configureStream succeeds, CameraServer notifies the app with a callback, and the app then calls the AOSP setRepeatingRequest API into CameraServer. At initialization CameraServer started a dedicated loop thread that waits to receive requests.

  5. CameraServer hands each request to the HAL for processing; when the HAL's result arrives, it takes the buffer from the request's result and fills it into the container the app supplied. For a setRepeatingRequest preview the buffer goes to the preview Surface; for a capture request the buffer goes to the ImageReader's Surface.

  6. A Surface is essentially a user-side wrapper of a BufferQueue. When a Surface the app handed to CameraServer has been filled, the BufferQueue mechanism notifies the app, whose controls then consume the contents of their containers: the preview Surface's content is submitted through the View system to SurfaceFlinger for composition and display (the preview), while a filled ImageReader Surface is consumed by the app saving it as an image file.

  7. Video recording is set aside for now; see the overview of the Android MediaRecorder framework.
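The Surface/BufferQueue handoff in steps 5 and 6 can be modeled as a plain bounded producer/consumer queue. The sketch below is only an illustration of that flow; the class and method names (BufferQueueSketch, queueBuffer, acquireBuffer) are invented for this example and are not Android APIs:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Toy model of the Surface/BufferQueue handoff: "CameraServer" is the producer
// that fills buffers, and the app-side control is the consumer that drains and
// displays them. All names here are illustrative, not Android APIs.
public class BufferQueueSketch {
    // A bounded queue stands in for the BufferQueue inside a Surface.
    private final BlockingQueue<int[]> queue = new ArrayBlockingQueue<>(4);

    // Producer side: the server fills a buffer with frame data and queues it.
    public void queueBuffer(int[] frame) {
        try {
            queue.put(frame); // blocks when all buffers are full
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Consumer side: the app control acquires the next filled buffer.
    public int[] acquireBuffer() {
        try {
            return queue.take(); // blocks until a filled buffer is available
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        BufferQueueSketch surface = new BufferQueueSketch();
        // Producer thread: "CameraServer" fills three frames.
        Thread producer = new Thread(() -> {
            for (int i = 0; i < 3; i++) {
                surface.queueBuffer(new int[] {i});
            }
        });
        producer.start();
        // Consumer: the preview control drains the frames in order.
        for (int i = 0; i < 3; i++) {
            System.out.println("consumed frame " + surface.acquireBuffer()[0]);
        }
        producer.join();
    }
}
```

The bounded capacity mirrors why the real pipeline stalls when the consumer falls behind: the producer cannot queue a new frame until a buffer is released.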

  • The flow above can be summarized in a simple diagram. (flow diagram)

2. Overview of the Android Camera Layers

2.1 Camera App Layer

  • The application layer is what app developers focus on: building the user-facing camera app out of the components AOSP makes available. The main APIs and key points are in the official docs: Android Developers / Docs / Guides / Camera.

  • What an app developer needs to do, following the AOSP APIs, is open the camera, set the basic camera parameters, send request commands, and display the returned data in the UI or save it to storage.

  • App developers do not need to know how many cameras the phone has, what brand they are, how they are combined, or which camera is on or off in a particular mode. Through the AOSP APIs they issue commands over AIDL/Binder to the CameraServer process in the framework layer and fetch data back from it.

  • With one preview control and one capture-and-save control, the basic flow is:

    1. openCamera: powers up the sensor.
    2. configureStream: hands the Surface containers inside controls such as GLSurfaceView and ImageReader to CameraServer.
    3. request: preview uses setRepeatingRequest, a single shot can use capture; both boil down to sending a request to CameraServer.
    4. CameraServer fills each request's resulting buffer into the corresponding Surface container; once filled, the BufferQueue mechanism calls back into the matching Surface control's handler in the app layer, at which point the app's Surfaces hold the data to display as preview or save as a picture.
  • Two key classes:

    • (1) CameraManager: a unique system service for detecting, connecting to, and describing camera devices; it manages all CameraDevice instances and reaches CameraService through ICameraService.
    • (2) CameraDevice: the abstract representation of a single camera attached to the Android device; it reaches CameraDeviceClient through ICameraDeviceUser. CameraDevice is an abstract class; CameraDeviceImpl.java extends CameraDevice.java and provides the concrete implementations of its abstract methods.
  • The four most important steps when operating the camera:

    • CameraManager --> openCamera ---> open the camera
    • CameraDeviceImpl --> createCaptureSession ---> create a capture session
    • CameraCaptureSession --> setRepeatingRequest ---> start the repeating preview
    • CameraDeviceImpl --> capture ---> capture a still image

2.2 Camera Framework Layer Overview

  1. Native framework:

    • Code path: frameworks/av/camera/. It provides the implementations of the AIDL interfaces ICameraService, ICameraDeviceUser, ICameraDeviceCallbacks, ICameraServiceListener, etc., as well as the camera server's main function.
    • AIDL is a Binder-based mechanism that lets app-framework code call into native-framework code; the interface definitions live under frameworks/av/camera/aidl/android/hardware.
    • Its main classes:
      • (1) ICameraService is the camera service interface, used to request connections, add listeners, and so on.
      • (2) ICameraDeviceUser is the interface to a specific opened camera device; the app framework accesses the concrete device through it.
      • (3) ICameraServiceListener and ICameraDeviceCallbacks are the callbacks from CameraService and CameraDevice, respectively, back to the app framework.
  2. Camera Service

    • Code path: frameworks/av/services/camera/. CameraServer bridges the stack: upward it exposes the AOSP API services to the app layer, downward it interacts directly with the HAL. In practice CameraServer itself very rarely misbehaves; most problems originate in the app layer or the HAL.

2.2.1 CameraServer Initialization

2.2.2 App Operations on CameraServer

2.2.2.1 openCamera:
2.2.2.2 configureStream:
2.2.2.3 preview and capture request:
2.2.2.4 flush and close:

2.3 The Camera HAL3 Subsystem

  • Official Android documentation on the HAL3 subsystem.

  • Android's camera hardware abstraction layer (HAL) connects the higher-level camera framework APIs in android.hardware.camera2 to the underlying camera driver and hardware. Android 8.0 introduced Treble, which moved the camera HAL API onto a stable interface defined by the HAL Interface Definition Language (HIDL).

The HAL3 architecture

  1. The app issues requests to the camera subsystem; one request corresponds to one set of results. A request contains all configuration information, including resolution and pixel format; manual sensor, lens, and flash controls; 3A operating modes; RAW-to-YUV processing controls; statistics generation; and so on. Multiple requests can be in flight at once, submitting a request never blocks, and requests are always processed in the order received.

  2. As the figure shows, a request carries the Surface data container into the framework's CameraServer, where it is wrapped into a Camera3OutputStream instance; within a CameraCaptureSession it is packaged into a HAL request and handed to the HAL layer. When the HAL has produced the data it returns it to CameraServer, i.e., a CaptureResult is reported to the framework, and CameraServer places the HAL's data into the stream's Surface container. Those Surfaces are exactly the ones wrapped by the app-layer controls, so the app ends up with the data from the camera subsystem.

  3. HAL3 conveys events and data through CaptureRequest and CaptureResult; each request corresponds to exactly one result.
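The one-request-one-result pairing above can be modeled with a map keyed by frame number: each submitted request is assigned a frame number, and the result that arrives later is matched back to it. This is a minimal sketch; the class and method names (RequestResultTracker, submit, onResult) are invented for illustration, not framework or HAL types:

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of HAL3's request/result pairing. Each submitted request gets a
// monotonically increasing frame number; the result coming back later is
// matched to it and removed from the in-flight set.
public class RequestResultTracker {
    private long nextFrameNumber = 0;
    private final Map<Long, String> inFlight = new HashMap<>();

    // Submit a request; returns the frame number it was assigned.
    public long submit(String requestTag) {
        long frame = nextFrameNumber++;
        inFlight.put(frame, requestTag);
        return frame;
    }

    // A result arrives for a frame number; pops and returns the matching request.
    public String onResult(long frameNumber) {
        return inFlight.remove(frameNumber);
    }

    // How many requests are still waiting for their result.
    public int pending() {
        return inFlight.size();
    }
}
```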

  4. The above is the stock Android HAL3 definition; each chip vendor provides its own implementation behind these interfaces, e.g., Qualcomm's mm-camera and CamX, or MediaTek's mtkcam HAL3.

  5. The HAL3 interfaces are defined under /hardware/interfaces/camera/.

  • The walkthrough of the HAL-layer code and architecture is left for the end - a study of Qualcomm's HAL3 CamX architecture.

3. Exploring the Android Camera Source Code

3.1 Camera2 API

  • Camera API code: frameworks/base/core/java/android/hardware/camera2

  • Camera2 API class-relationship diagram. (diagram)

  • The android.hardware.camera2 package, added in API level 21, gives developers a camera toolkit that replaces the older Camera control class. The package models a camera device as a pipeline: it accepts capture requests for single frames, captures one image per request, and then outputs a capture-result metadata packet plus the set of output image buffers the request asked for. Requests are processed in order, and multiple requests can be in flight at once. Because the camera device is a multi-stage pipeline, keeping several capture requests in flight is necessary to sustain full frame rate on most Android devices.

  • The key classes involved in the Camera2 API:

    • CameraManager: the camera manager, used to open and close the system's cameras.
    • CameraCharacteristics: describes a camera's properties; obtained via CameraManager's getCameraCharacteristics(@NonNull String cameraId).
    • CameraDevice: represents a system camera, analogous to the old Camera class.
    • CameraCaptureSession: the session class; to take pictures, preview, and so on, first create an instance and then drive it through its methods (e.g., capture() to take a picture).
    • CaptureRequest: describes a single operation request; capture, preview, and similar operations all take a CaptureRequest, and the concrete controls are set through its fields.
    • CaptureResult: describes the result once a capture completes.
  • To operate a camera device you need a CameraManager instance. CameraDevice exposes a set of static properties describing the device: the controls that can be set and the device's output parameters. These properties are carried by a CameraCharacteristics instance, obtained via CameraManager's getCameraCharacteristics(String) method.

  • To capture or stream images from a camera device, the app must first create a CameraCaptureSession containing the set of output Surfaces for the device. The target Surfaces typically come from SurfaceView, SurfaceTexture via Surface(SurfaceTexture), MediaCodec, MediaRecorder, Allocation, or ImageReader.

  • The camera preview is usually shown with a SurfaceView or TextureView, and captured image buffers can be read through an ImageReader.

    • TextureView can display a content stream, for example video or an OpenGL scene; the stream may come from the app's own process or from a remote process.
    • TextureView can only be used inside a hardware-accelerated window; when rendered in software it draws nothing.
    • TextureView does not create a separate window but behaves as a regular View; this key difference lets a TextureView be moved, transformed, and animated.
  • Next, the app builds a CaptureRequest, which defines the parameters for capturing a single image.

  • Once built, a request can be handed to the active capture session either for a one-shot capture or to repeat endlessly. Both methods also have variants that accept a list of requests to use as a burst capture or repeating burst. Repeating requests have lower priority than captures, so a request submitted via capture() while a repeating request is configured will be captured before any new instance of the currently repeating (burst) capture begins.
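That scheduling rule (one-shot captures jump ahead of the next instance of the repeating request) can be sketched as a tiny scheduler. This is an illustrative model only; RequestScheduler and its methods are invented names, not CameraServer internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of the request queue: pending one-shot capture() requests are
// served before the next instance of the repeating request is issued.
public class RequestScheduler {
    private final Deque<String> oneShots = new ArrayDeque<>();
    private String repeating;

    // Install (or replace) the repeating request, e.g. the preview request.
    public void setRepeatingRequest(String request) {
        repeating = request;
    }

    // Queue a one-shot capture request.
    public void capture(String request) {
        oneShots.add(request);
    }

    // Called once per pipeline slot: pending captures win; otherwise the
    // repeating request is re-issued (may be null if none is set).
    public String nextRequest() {
        if (!oneShots.isEmpty()) {
            return oneShots.poll();
        }
        return repeating;
    }
}
```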

  • After processing a request, the camera device generates a TotalCaptureResult containing information about the device's state at capture time plus the final settings used; these may differ from the request when rounding or resolving conflicting parameters was necessary. The device also sends one frame of image data to each output Surface included in the request. These frames are produced asynchronously with respect to the output CaptureResult, sometimes substantially later.

  • The Camera2 capture flow is shown below. (diagram)

    • The developer issues capture requests to the camera by creating CaptureRequests; the requests queue up for the camera to process, and the camera returns its results to the developer wrapped in capture metadata. The whole flow takes place within a single CameraCaptureSession.

3.1.1 Opening the Camera

  • Before opening the camera we first obtain the CameraManager, then the camera ID list, and from it the characteristics of each camera (mainly the front and back ones).
mCameraManager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
try {
    final String[] ids = mCameraManager.getCameraIdList();
    numberOfCameras = ids.length;
    for (String id : ids) {
        final CameraCharacteristics characteristics = mCameraManager.getCameraCharacteristics(id);

        // LENS_FACING may be absent on some devices, so use Integer rather than unboxing.
        final Integer facing = characteristics.get(CameraCharacteristics.LENS_FACING);
        if (facing != null && facing == CameraCharacteristics.LENS_FACING_FRONT) {
            faceFrontCameraId = id;
            faceFrontCameraOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
            frontCameraCharacteristics = characteristics;
        } else {
            faceBackCameraId = id;
            faceBackCameraOrientation = characteristics.get(CameraCharacteristics.SENSOR_ORIENTATION);
            backCameraCharacteristics = characteristics;
        }
    }
} catch (Exception e) {
    Log.e(TAG, "Error during camera initialization", e);
}

  • Like the old Camera API, Camera2 has the notion of a cameraId: mCameraManager.getCameraIdList() returns the list of camera IDs, and mCameraManager.getCameraCharacteristics(id) returns the characteristics of the camera with each ID.

  • Of the fields in CameraCharacteristics, the main ones used are:

    • LENS_FACING: front camera (LENS_FACING_FRONT) or back camera (LENS_FACING_BACK).
    • SENSOR_ORIENTATION: the orientation of the camera sensor.
    • FLASH_INFO_AVAILABLE: whether a flash is available.
    • INFO_SUPPORTED_HARDWARE_LEVEL: the camera hardware level the current device supports.
  • In practice, not every Camera2 feature is available on every vendor's Android device; query characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL) and use the return value to determine the supported level. Specifically:

    • INFO_SUPPORTED_HARDWARE_LEVEL_FULL: full hardware support, allowing manual control of full-HD capture, burst mode, and other new features.
    • INFO_SUPPORTED_HARDWARE_LEVEL_LIMITED: limited support; individual capabilities must be queried separately.
    • INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY: supported by all devices, i.e., equivalent to the features of the deprecated Camera API.
  • Using this INFO_SUPPORTED_HARDWARE_LEVEL value we can decide whether to use Camera or Camera2, as follows:

@TargetApi(Build.VERSION_CODES.LOLLIPOP)
public static boolean hasCamera2(Context mContext) {
    if (mContext == null) return false;
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) return false;
    try {
        CameraManager manager = (CameraManager) mContext.getSystemService(Context.CAMERA_SERVICE);
        String[] idList = manager.getCameraIdList();
        boolean notFull = true;
        if (idList.length == 0) {
            notFull = false;
        } else {
            for (final String str : idList) {
                if (str == null || str.trim().isEmpty()) {
                    notFull = false;
                    break;
                }
                final CameraCharacteristics characteristics = manager.getCameraCharacteristics(str);

                final Integer supportLevel = characteristics.get(CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL);
                if (supportLevel == null
                        || supportLevel == CameraCharacteristics.INFO_SUPPORTED_HARDWARE_LEVEL_LEGACY) {
                    notFull = false;
                    break;
                }
            }
        }
        return notFull;
    } catch (Throwable ignore) {
        return false;
    }
}
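A related pitfall: the numeric values of the hardware-level constants are not ordered by capability (in AOSP, LIMITED = 0, FULL = 1, LEGACY = 2), so a raw comparison like `deviceLevel >= requiredLevel` gives the wrong answer. A minimal sketch of an explicit capability ordering follows, with those constant values inlined so it runs without the Android SDK; the class name `HardwareLevel` is invented:

```java
import java.util.Arrays;
import java.util.List;

// The INFO_SUPPORTED_HARDWARE_LEVEL constants are not numerically ordered by
// capability, so compare positions in an explicit ranking instead. The raw
// AOSP values are inlined here (LIMITED = 0, FULL = 1, LEGACY = 2) so the
// sketch compiles outside Android.
public class HardwareLevel {
    static final int LIMITED = 0;
    static final int FULL = 1;
    static final int LEGACY = 2;

    // Capability order from weakest to strongest.
    private static final List<Integer> ORDER = Arrays.asList(LEGACY, LIMITED, FULL);

    // Returns true if the device's level is at least the required level.
    public static boolean isAtLeast(int deviceLevel, int requiredLevel) {
        return ORDER.indexOf(deviceLevel) >= ORDER.indexOf(requiredLevel);
    }
}
```

Newer releases add further levels (e.g., LEVEL_3, EXTERNAL); a real helper would include them in the ranking as well.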

  • Opening the camera comes down to calling mCameraManager.openCamera(currentCameraId, stateCallback, backgroundHandler), which, as you can see, takes three parameters:

    • String cameraId: the camera's unique ID.
    • CameraDevice.StateCallback callback: the callbacks for opening the camera.
    • Handler handler: the Handler on which the StateCallback runs; usually the current thread's Handler.
  • mCameraManager.openCamera(currentCameraId, stateCallback, backgroundHandler);

  • CameraDevice.StateCallback, mentioned above, is the callback for opening the camera; it defines callbacks for open, disconnect, error, and so on, in which we perform the corresponding handling.

private CameraDevice.StateCallback stateCallback = new CameraDevice.StateCallback() {
    @Override
    public void onOpened(@NonNull CameraDevice cameraDevice) {
        // Keep a reference to the opened CameraDevice
        mCameraDevice = cameraDevice;
    }

    @Override
    public void onDisconnected(@NonNull CameraDevice cameraDevice) {
        // Close the CameraDevice
        cameraDevice.close();

    }

    @Override
    public void onError(@NonNull CameraDevice cameraDevice, int error) {
        // Close the CameraDevice
        cameraDevice.close();
    }
};

3.1.2 Closing the Camera

  • To close the camera, simply call cameraDevice.close() to shut down the CameraDevice.

3.1.3 Starting Preview

  • Camera2 drives everything by creating request sessions; concretely:

    • Call mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW) to create a CaptureRequest.
    • Call mCameraDevice.createCaptureSession() to create a CaptureSession.
    • CaptureRequest.Builder createCaptureRequest(@RequestTemplate int templateType)
  • The templateType parameter of createCaptureRequest() selects the request type, of which there are six:

    • TEMPLATE_PREVIEW: creates a request suitable for preview.
    • TEMPLATE_STILL_CAPTURE: creates a request suitable for still image capture, prioritizing image quality over frame rate.
    • TEMPLATE_RECORD: creates a request for video recording.
    • TEMPLATE_VIDEO_SNAPSHOT: creates a request for grabbing a still while recording video.
    • TEMPLATE_ZERO_SHUTTER_LAG: creates a request suited to zero-shutter-lag capture, maximizing image quality without degrading the preview frame rate.
    • TEMPLATE_MANUAL: creates a basic capture request in which all automatic controls are disabled (auto-exposure, auto-white-balance, auto-focus).
  • createCaptureSession(@NonNull List<Surface> outputs, @NonNull CameraCaptureSession.StateCallback callback, @Nullable Handler handler)

  • createCaptureSession() takes three parameters:

    • List<Surface> outputs: the list of Surfaces to output to.
    • CameraCaptureSession.StateCallback callback: callbacks for the session's state.
    • Handler handler: the Handler that determines which thread the callback is invoked on; passing the current thread's Handler is usually fine.
  • The callback methods of CameraCaptureSession.StateCallback:

    • onConfigured(@NonNull CameraCaptureSession session); the camera has finished configuration and can handle capture requests.
    • onConfigureFailed(@NonNull CameraCaptureSession session); camera configuration failed.
    • onReady(@NonNull CameraCaptureSession session); the camera is ready and no requests are pending.
    • onActive(@NonNull CameraCaptureSession session); the camera is processing requests.
    • onClosed(@NonNull CameraCaptureSession session); the session has been closed.
    • onSurfacePrepared(@NonNull CameraCaptureSession session, @NonNull Surface surface); the Surface is ready.
  • With all of that understood, creating the preview request is straightforward.

previewRequestBuilder = mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
previewRequestBuilder.addTarget(workingSurface);

// Besides the preview Surface we also add imageReader.getSurface(), which is
// what receives the data once a still capture completes.
mCameraDevice.createCaptureSession(Arrays.asList(workingSurface, imageReader.getSurface()),
        new CameraCaptureSession.StateCallback() {
            @Override
            public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                try {
                    previewRequest = previewRequestBuilder.build();
                    cameraCaptureSession.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler);
                } catch (CameraAccessException e) {
                    Log.e(TAG, "Failed to start preview", e);
                }
            }

            @Override
            public void onConfigureFailed(@NonNull CameraCaptureSession cameraCaptureSession) {
                Log.d(TAG, "Fail while starting preview");
            }
        }, null);

  • Note that onConfigured() calls cameraCaptureSession.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler), which keeps the preview running continuously.

  • As mentioned above, imageReader.getSurface() is what receives the data once a capture completes; concretely, set an OnImageAvailableListener on the ImageReader and fetch the image in its onImageAvailable() method.

mImageReader.setOnImageAvailableListener(mOnImageAvailableListener, mBackgroundHandler);

private final ImageReader.OnImageAvailableListener mOnImageAvailableListener
        = new ImageReader.OnImageAvailableListener() {

    @Override
    public void onImageAvailable(ImageReader reader) {
        // When an image becomes available, grab it and save it off the UI thread.
        mBackgroundHandler.post(new ImageSaver(reader.acquireNextImage(), mFile));
    }
};

3.1.4 Stopping Preview

  • Stopping the preview means closing the current preview session; building on the preview code above:
if (captureSession != null) {
    try {
        // Stop in-flight work before closing; calling these after close() would throw.
        captureSession.stopRepeating();
        captureSession.abortCaptures();
    } catch (Exception ignore) {
    } finally {
        captureSession.close();
        captureSession = null;
    }
}

3.1.5 Taking a Picture

  • Taking a picture breaks down into three steps:
    1. Focus

      • A CameraCaptureSession.CaptureCallback handles the results of the focus request.
    2. Capture

      • captureStillPicture() performs the actual capture.
    3. Unlock focus

      • After the shot, unlock the camera's focus so it returns to the preview state.
// Focusing
private CameraCaptureSession.CaptureCallback captureCallback = new CameraCaptureSession.CaptureCallback() {

    @Override
    public void onCaptureProgressed(@NonNull CameraCaptureSession session,
                                    @NonNull CaptureRequest request,
                                    @NonNull CaptureResult partialResult) {
    }

    @Override
    public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                   @NonNull CaptureRequest request,
                                   @NonNull TotalCaptureResult result) {
            // Wait for auto-focus
            final Integer afState = result.get(CaptureResult.CONTROL_AF_STATE);
            if (afState == null) {
                // No AF state reported; capture directly
                captureStillPicture();
            } else if (CaptureResult.CONTROL_AF_STATE_FOCUSED_LOCKED == afState
                    || CaptureResult.CONTROL_AF_STATE_NOT_FOCUSED_LOCKED == afState
                    || CaptureResult.CONTROL_AF_STATE_INACTIVE == afState
                    || CaptureResult.CONTROL_AF_STATE_PASSIVE_SCAN == afState) {
                Integer aeState = result.get(CaptureResult.CONTROL_AE_STATE);
                if (aeState == null ||
                        aeState == CaptureResult.CONTROL_AE_STATE_CONVERGED) {
                    previewState = STATE_PICTURE_TAKEN;
                    // Focus is locked; take the picture
                    captureStillPicture();
                } else {
                    runPreCaptureSequence();
                }
            }
    }
};

// Capturing
private void captureStillPicture() {
    try {
        if (null == mCameraDevice) {
            return;
        }
        
        // Build the CaptureRequest used for the still capture
        final CaptureRequest.Builder captureBuilder =
                mCameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE);
        captureBuilder.addTarget(imageReader.getSurface());

        // Use the same AF mode as the preview
        captureBuilder.set(CaptureRequest.CONTROL_AF_MODE, CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
        // Set the JPEG orientation
        captureBuilder.set(CaptureRequest.JPEG_ORIENTATION, getPhotoOrientation(mCameraConfigProvider.getSensorPosition()));

        // Callback invoked when the still capture completes
        CameraCaptureSession.CaptureCallback CaptureCallback = new CameraCaptureSession.CaptureCallback() {
            @Override
            public void onCaptureCompleted(@NonNull CameraCaptureSession session,
                                           @NonNull CaptureRequest request,
                                           @NonNull TotalCaptureResult result) {
                Log.d(TAG, "onCaptureCompleted: ");
            }
        };
        // Stop the repeating preview
        captureSession.stopRepeating();
        // Capture the still image
        captureSession.capture(captureBuilder.build(), CaptureCallback, null);

    } catch (CameraAccessException e) {
        Log.e(TAG, "Error during capturing picture");
    }
}

// Unlocking focus
try {
    // Reset the auto-focus trigger
    previewRequestBuilder.set(CaptureRequest.CONTROL_AF_TRIGGER, CameraMetadata.CONTROL_AF_TRIGGER_CANCEL);
    captureSession.capture(previewRequestBuilder.build(), captureCallback, backgroundHandler);
    // Return the camera to the normal preview state
    previewState = STATE_PREVIEW;
    // Resume the repeating preview
    captureSession.setRepeatingRequest(previewRequest, captureCallback, backgroundHandler);
} catch (Exception e) {
    Log.e(TAG, "Error during focus unlocking");
}

3.1.6 Starting/Stopping Video Recording

  • This part is set aside for now.

3.2 Android Camera Internals: the openCamera Module (Part 1)

  • For Java-side development it is enough to know how to call the APIs and bring up the camera; to understand how the APIs are implemented, though, we must step down through the framework code. This section starts from the top-level openCamera API and examines what happens underneath once the camera is brought up.

3.2.1 CameraManager

  • CameraManager is one of the local services in the SystemService collection, registered in SystemServiceRegistry:
        registerService(Context.CAMERA_SERVICE, CameraManager.class,
                new CachedServiceFetcher<CameraManager>() {
            @Override
            public CameraManager createService(ContextImpl ctx) {
                return new CameraManager(ctx);
            }});
  • SystemServiceRegistry keeps two HashMaps holding the local SystemService entries. One point to note: unlike Binder services, these are not Binder services, just ordinary modules aggregated into a local service registry for ease of management.
    private static final HashMap<Class<?>, String> SYSTEM_SERVICE_NAMES =
            new HashMap<Class<?>, String>();
    private static final HashMap<String, ServiceFetcher<?>> SYSTEM_SERVICE_FETCHERS =
            new HashMap<String, ServiceFetcher<?>>();
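The two-map structure above can be sketched as a tiny standalone registry: one map from class to service name, one from name to a cached fetcher that creates the service exactly once. `DemoRegistry` and its methods are invented names that mirror the shape of SystemServiceRegistry, not the real AOSP code:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Simplified model of SystemServiceRegistry's two maps: class -> service name,
// and service name -> fetcher. A cached fetcher creates the service lazily and
// then always returns the same instance. Names here are illustrative.
public class DemoRegistry {
    private static final Map<Class<?>, String> SERVICE_NAMES = new HashMap<>();
    private static final Map<String, Supplier<?>> SERVICE_FETCHERS = new HashMap<>();

    // Wraps a factory so the service is created at most once (like CachedServiceFetcher).
    private static <T> Supplier<T> cached(Supplier<T> factory) {
        return new Supplier<T>() {
            private T instance;
            @Override
            public synchronized T get() {
                if (instance == null) {
                    instance = factory.get();
                }
                return instance;
            }
        };
    }

    public static <T> void registerService(String name, Class<T> clazz, Supplier<T> factory) {
        SERVICE_NAMES.put(clazz, name);
        SERVICE_FETCHERS.put(name, cached(factory));
    }

    public static Object getSystemService(String name) {
        Supplier<?> fetcher = SERVICE_FETCHERS.get(name);
        return fetcher == null ? null : fetcher.get();
    }

    public static String getSystemServiceName(Class<?> clazz) {
        return SERVICE_NAMES.get(clazz);
    }
}
```

The caching is the point of interest: getSystemService(Context.CAMERA_SERVICE) always hands the same CameraManager back to a given context, which this sketch reproduces.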

3.2.2 The openCamera Function

  • CameraManager has two openCamera(...) overloads; one takes a Handler and the other an Executor (a thread pool), the intent being to run the camera's time-consuming operations off the calling thread.
    • cameraId identifies the camera to open.
    • callback is a state callback, triggered when the camera is opened.
    • handler is the Handler on which the time-consuming work is dispatched.
    • executor is the Executor (thread pool) used instead of a Handler.
public void openCamera(@NonNull String cameraId,
            @NonNull final CameraDevice.StateCallback callback, @Nullable Handler handler)

public void openCamera(@NonNull String cameraId,
            @NonNull @CallbackExecutor Executor executor,
            @NonNull final CameraDevice.StateCallback callback)
  • The openCamera call flow. (diagram)
3.2.2.1 The openCameraDeviceUserAsync Function
private CameraDevice openCameraDeviceUserAsync(String cameraId,
            CameraDevice.StateCallback callback, Executor executor, final int uid)
            throws CameraAccessException
{
//......
}
  • The return value is a CameraDevice. The Camera2 API section covered the relationships between the main classes in the camera framework: CameraDevice is abstract and CameraDeviceImpl is its implementation, so the goal is to obtain a CameraDeviceImpl instance.

  • This function's job is to fetch the camera device information from the layers below and obtain the device instance for the given cameraId. Its work breaks down into five steps:

    • Get the device information for the camera identified by cameraId.
    • Use that device information to create a CameraDeviceImpl instance.
    • Call the remote CameraService to obtain the camera's remote service.
    • Install the obtained remote service into the CameraDeviceImpl instance.
    • Return the CameraDeviceImpl instance.
3.2.2.2 Getting the Device Information for the Given cameraId
CameraCharacteristics characteristics = getCameraCharacteristics(cameraId);
  • A single simple call returning a CameraCharacteristics, which provides the CameraDevice's various properties; it is queried through the getCameraCharacteristics function.
    public CameraCharacteristics getCameraCharacteristics(@NonNull String cameraId)
            throws CameraAccessException {
        CameraCharacteristics characteristics = null;
        if (CameraManagerGlobal.sCameraServiceDisabled) {
            throw new IllegalArgumentException("No cameras available on device");
        }
        synchronized (mLock) {
            ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
            if (cameraService == null) {
                throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED,
                        "Camera service is currently unavailable");
            }
            try {
                if (!supportsCamera2ApiLocked(cameraId)) {
                    int id = Integer.parseInt(cameraId);
                    String parameters = cameraService.getLegacyParameters(id);
                    CameraInfo info = cameraService.getCameraInfo(id);
                    characteristics = LegacyMetadataMapper.createCharacteristics(parameters, info);
                } else {
                    CameraMetadataNative info = cameraService.getCameraCharacteristics(cameraId);
                    characteristics = new CameraCharacteristics(info);
                }
            } catch (ServiceSpecificException e) {
                throwAsPublicException(e);
            } catch (RemoteException e) {
                throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED,
                        "Camera service is currently unavailable", e);
            }
        }
        return characteristics;
    }
  • A key function: supportsCamera2ApiLocked(cameraId), which asks whether the current camera service supports the Camera2 API, returning true if it does and false if not.
    private boolean supportsCameraApiLocked(String cameraId, int apiVersion) {
        /*
         * Possible return values:
         * - NO_ERROR => CameraX API is supported
         * - CAMERA_DEPRECATED_HAL => CameraX API is *not* supported (thrown as an exception)
         * - Remote exception => If the camera service died
         *
         * Anything else is an unexpected error we don't want to recover from.
         */
        try {
            ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
            // If no camera service, no support
            if (cameraService == null) return false;

            return cameraService.supportsCameraApi(cameraId, apiVersion);
        } catch (RemoteException e) {
            // Camera service is now down, no support for any API level
        }
        return false;
    }
  • The CameraService being called corresponds to ICameraService.aidl; the implementing class is in frameworks/av/services/camera/libcameraservice/CameraService.h.

  • The connection between CameraManager and CameraService is illustrated below. (diagram)

  • CameraManagerGlobal is an inner class of CameraManager whose server side lives in the native layer. As noted in the Camera2 API introduction, the camera service sits in frameworks/av/services/camera/libcameraservice/ and builds into the shared library libcameraservices.so. To get familiar with the camera code, start with the architecture code.

  • What is checked here is which HAL version the current camera stack is based on; see the switch below:

    • If the device is based on HAL 1.0, 3.0, or 3.1 and apiVersion is not API_VERSION_2, it is supported. Be clear that API_VERSION_2 here is not API level 2; it distinguishes Camera1 from Camera2.
    • If the device is based on HAL 3.2, 3.3, or 3.4, it is supported.
    • On current Android versions, Camera2 is normally supported.
Status CameraService::supportsCameraApi(const String16& cameraId, int apiVersion,
        /*out*/ bool *isSupported) {
    ATRACE_CALL();

    const String8 id = String8(cameraId);

    ALOGV("%s: for camera ID = %s", __FUNCTION__, id.string());

    switch (apiVersion) {
        case API_VERSION_1:
        case API_VERSION_2:
            break;
        default:
            String8 msg = String8::format("Unknown API version %d", apiVersion);
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(ERROR_ILLEGAL_ARGUMENT, msg.string());
    }

    int deviceVersion = getDeviceVersion(id);
    switch(deviceVersion) {
        case CAMERA_DEVICE_API_VERSION_1_0:
        case CAMERA_DEVICE_API_VERSION_3_0:
        case CAMERA_DEVICE_API_VERSION_3_1:
            if (apiVersion == API_VERSION_2) {
                ALOGV("%s: Camera id %s uses HAL version %d <3.2, doesn't support api2 without shim",
                        __FUNCTION__, id.string(), deviceVersion);
                *isSupported = false;
            } else { // if (apiVersion == API_VERSION_1) {
                ALOGV("%s: Camera id %s uses older HAL before 3.2, but api1 is always supported",
                        __FUNCTION__, id.string());
                *isSupported = true;
            }
            break;
        case CAMERA_DEVICE_API_VERSION_3_2:
        case CAMERA_DEVICE_API_VERSION_3_3:
        case CAMERA_DEVICE_API_VERSION_3_4:
            ALOGV("%s: Camera id %s uses HAL3.2 or newer, supports api1/api2 directly",
                    __FUNCTION__, id.string());
            *isSupported = true;
            break;
        case -1: {
            String8 msg = String8::format("Unknown camera ID %s", id.string());
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(ERROR_ILLEGAL_ARGUMENT, msg.string());
        }
        default: {
            String8 msg = String8::format("Unknown device version %x for device %s",
                    deviceVersion, id.string());
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(ERROR_INVALID_OPERATION, msg.string());
        }
    }

    return Status::ok();
}
  • Fetching the camera device information through the Camera2 API path:
CameraMetadataNative info = cameraService.getCameraCharacteristics(cameraId);
characteristics = new CameraCharacteristics(info);

The getCameraCharacteristics call flow

  • DeviceInfo3 here is CameraProviderManager::ProviderInfo::DeviceInfo3, a struct inside CameraProviderManager. The final return type is CameraMetadata, a parcelable type: the native side is frameworks/av/camera/include/camera/CameraMetadata.h and the Java side is frameworks/base/core/java/android/hardware/camera2/impl/CameraMetadataNative.java; parcelable types can be sent across processes. Below, the native code typedefs CameraMetadata as CameraMetadataNative.
namespace hardware {
namespace camera2 {
namespace impl {
using ::android::CameraMetadata;
typedef CameraMetadata CameraMetadataNative;
}
}
}
  • The call we care about is:
status_t CameraProviderManager::getCameraCharacteristicsLocked(const std::string &id,
        CameraMetadata* characteristics) const {
    auto deviceInfo = findDeviceInfoLocked(id, /*minVersion*/ {3,0}, /*maxVersion*/ {4,0});
    if (deviceInfo == nullptr) return NAME_NOT_FOUND;

    return deviceInfo->getCameraCharacteristics(characteristics);
}
  • It calls findDeviceInfoLocked(...), whose return type is a DeviceInfo struct. CameraProviderManager.h defines three DeviceInfo structs: besides DeviceInfo there are DeviceInfo1 and DeviceInfo3, both inheriting from DeviceInfo, with DeviceInfo1 serving HAL v1 and DeviceInfo3 serving HAL v3 devices; both expose basic camera-device information. Here is the findDeviceInfoLocked(...) function:
CameraProviderManager::ProviderInfo::DeviceInfo* CameraProviderManager::findDeviceInfoLocked(
        const std::string& id,
        hardware::hidl_version minVersion, hardware::hidl_version maxVersion) const {
    for (auto& provider : mProviders) {
        for (auto& deviceInfo : provider->mDevices) {
            if (deviceInfo->mId == id &&
                    minVersion <= deviceInfo->mVersion && maxVersion >= deviceInfo->mVersion) {
                return deviceInfo.get();
            }
        }
    }
    return nullptr;
}
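The matching logic in findDeviceInfoLocked (same id, HAL version inside [minVersion, maxVersion]) can be rendered in a few lines of Java. This is purely an illustrative re-expression; `Device` and `DeviceLookup` are invented stand-ins for DeviceInfo and the provider list, not real framework types:

```java
import java.util.List;

// Toy Java rendering of findDeviceInfoLocked: scan every provider's devices
// and return the one whose id matches and whose HAL version falls within the
// requested [min, max] range. Names are illustrative, not AOSP types.
public class DeviceLookup {
    public static class Device {
        final String id;
        final int major, minor; // HAL version, e.g. 3.2

        public Device(String id, int major, int minor) {
            this.id = id;
            this.major = major;
            this.minor = minor;
        }
    }

    // Compare versions as (major, minor) pairs, like hardware::hidl_version.
    private static int compare(int aMaj, int aMin, int bMaj, int bMin) {
        return aMaj != bMaj ? Integer.compare(aMaj, bMaj) : Integer.compare(aMin, bMin);
    }

    public static Device findDevice(List<List<Device>> providers, String id,
                                    int minMaj, int minMin, int maxMaj, int maxMin) {
        for (List<Device> provider : providers) {
            for (Device d : provider) {
                if (d.id.equals(id)
                        && compare(minMaj, minMin, d.major, d.minor) <= 0
                        && compare(maxMaj, maxMin, d.major, d.minor) >= 0) {
                    return d;
                }
            }
        }
        return null; // like returning nullptr
    }
}
```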
  • mProviders here is a list of ProviderInfo, another struct defined in CameraProviderManager.h; the three DeviceInfo variants above are all nested inside ProviderInfo. Below is an outline of ProviderInfo with much of the code trimmed away, but the core is still visible: ProviderInfo manages the phone's camera devices, storing them in mDevices via addDevice. Next, let's see how addDevice works.
    struct ProviderInfo :
            virtual public hardware::camera::provider::V2_4::ICameraProviderCallback,
            virtual public hardware::hidl_death_recipient
    {
    //......
        ProviderInfo(const std::string &providerName,
                sp<hardware::camera::provider::V2_4::ICameraProvider>& interface,
                CameraProviderManager *manager);
        ~ProviderInfo();

        status_t initialize();

        const std::string& getType() const;

        status_t addDevice(const std::string& name,
                hardware::camera::common::V1_0::CameraDeviceStatus initialStatus =
                hardware::camera::common::V1_0::CameraDeviceStatus::PRESENT,
                /*out*/ std::string *parsedId = nullptr);

        // ICameraProviderCallbacks interface - these lock the parent mInterfaceMutex
        virtual hardware::Return<void> cameraDeviceStatusChange(
                const hardware::hidl_string& cameraDeviceName,
                hardware::camera::common::V1_0::CameraDeviceStatus newStatus) override;
        virtual hardware::Return<void> torchModeStatusChange(
                const hardware::hidl_string& cameraDeviceName,
                hardware::camera::common::V1_0::TorchModeStatus newStatus) override;

        // hidl_death_recipient interface - this locks the parent mInterfaceMutex
        virtual void serviceDied(uint64_t cookie, const wp<hidl::base::V1_0::IBase>& who) override;

        // Basic device information, common to all camera devices
        struct DeviceInfo {
            //......
        };
        std::vector<std::unique_ptr<DeviceInfo>> mDevices;
        std::unordered_set<std::string> mUniqueCameraIds;
        int mUniqueDeviceCount;

        // HALv1-specific camera fields, including the actual device interface
        struct DeviceInfo1 : public DeviceInfo {
            //......
        };

        // HALv3-specific camera fields, including the actual device interface
        struct DeviceInfo3 : public DeviceInfo {
            //......
        };

    private:
        void removeDevice(std::string id);
    };
  • How is mProviders populated?
    • 1. CameraService --> onFirstRef()

    • 2. CameraService --> enumerateProviders()

    • 3. CameraProviderManager --> initialize(this)

    • The prototype of initialize(...) is: status_t initialize(wp<StatusListener> listener, ServiceInteractionProxy *proxy = &sHardwareServiceInteractionProxy);

      • The second parameter defaults to sHardwareServiceInteractionProxy:
    • Where hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName) comes from

    • In ./hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp, the argument passed in may be one of the following two:

      • const std::string kLegacyProviderName("legacy/0"); denotes the legacy (HAL v1) provider
      • const std::string kExternalProviderName("external/0"); denotes the external camera provider (ExternalCameraProvider)
// The second parameter defaults to sHardwareServiceInteractionProxy:
    struct ServiceInteractionProxy {
        virtual bool registerForNotifications(
                const std::string &serviceName,
                const sp<hidl::manager::V1_0::IServiceNotification>
                &notification) = 0;
        virtual sp<hardware::camera::provider::V2_4::ICameraProvider> getService(
                const std::string &serviceName) = 0;
        virtual ~ServiceInteractionProxy() {}
    };

    // Standard use case - call into the normal generated static methods which invoke
    // the real hardware service manager
    struct HardwareServiceInteractionProxy : public ServiceInteractionProxy {
        virtual bool registerForNotifications(
                const std::string &serviceName,
                const sp<hidl::manager::V1_0::IServiceNotification>
                &notification) override {
            return hardware::camera::provider::V2_4::ICameraProvider::registerForNotifications(
                    serviceName, notification);
        }
        virtual sp<hardware::camera::provider::V2_4::ICameraProvider> getService(
                const std::string &serviceName) override {
            return hardware::camera::provider::V2_4::ICameraProvider::getService(serviceName);
        }
    };
// ICameraProvider
ICameraProvider* HIDL_FETCH_ICameraProvider(const char* name) {
    if (strcmp(name, kLegacyProviderName) == 0) {
        CameraProvider* provider = new CameraProvider();
        if (provider == nullptr) {
            ALOGE("%s: cannot allocate camera provider!", __FUNCTION__);
            return nullptr;
        }
        if (provider->isInitFailed()) {
            ALOGE("%s: camera provider init failed!", __FUNCTION__);
            delete provider;
            return nullptr;
        }
        return provider;
    } else if (strcmp(name, kExternalProviderName) == 0) {
        ExternalCameraProvider* provider = new ExternalCameraProvider();
        return provider;
    }
    ALOGE("%s: unknown instance name: %s", __FUNCTION__, name);
    return nullptr;
}
  • How does addDevice work?
    • 1. When CameraProviderManager::ProviderInfo::initialize() runs, it enumerates the current camera devices; the code that performs the check is shown in (1).
    • 2. This eventually reaches the getCameraIdList function in ./hardware/interfaces/camera/provider/2.4/default/CameraProvider.cpp: CAMERA_DEVICE_STATUS_PRESENT indicates that a camera is available, and mCameraStatusMap stores the status of every camera device. See (2).
// (1)
    std::vector<std::string> devices;
    hardware::Return<void> ret = mInterface->getCameraIdList([&status, &devices](
            Status idStatus,
            const hardware::hidl_vec<hardware::hidl_string>& cameraDeviceNames) {
        status = idStatus;
        if (status == Status::OK) {
            for (size_t i = 0; i < cameraDeviceNames.size(); i++) {
                devices.push_back(cameraDeviceNames[i]);
            }
        } });
        
// (2)
Return<void> CameraProvider::getCameraIdList(getCameraIdList_cb _hidl_cb)  {
    std::vector<hidl_string> deviceNameList;
    for (auto const& deviceNamePair : mCameraDeviceNames) {
        if (mCameraStatusMap[deviceNamePair.first] == CAMERA_DEVICE_STATUS_PRESENT) {
            deviceNameList.push_back(deviceNamePair.second);
        }
    }
    hidl_vec<hidl_string> hidlDeviceNameList(deviceNameList);
    _hidl_cb(Status::OK, hidlDeviceNameList);
    return Void();
}
  • Let's recap the overall call structure: the Camera layered architecture
    • 1. The camera2 API discussed above sits in the framework layer, inside the application process.
    • 2. CameraService: the camera2 API reaches the server side via Binder IPC, and all camera operations are carried out there, under ./frameworks/av/services/camera/.
    • 3. The server side is itself only a bridge; the service calls down into the HAL (hardware abstraction layer), located at ./hardware/interfaces/camera/provider/2.4.
    • 4. The camera driver: the bottom driver layer, where the hardware is actually operated.
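As a toy illustration of this decoupling (invented names, not AOSP code), each layer can be modeled as an interface that hides the layer below, so a vendor HAL implementation can be swapped without touching the framework side:

```java
// Toy model of the layered camera stack: each layer only knows the interface
// of the layer below, mirroring how the AIDL/HIDL boundaries decouple them.
interface CameraHalProvider {          // plays the role of the HIDL ICameraProvider
    String openDevice(String cameraId);
}

interface CameraServiceApi {           // plays the role of the AIDL ICameraService
    String connectDevice(String cameraId);
}

class VendorHal implements CameraHalProvider {   // vendor-specific, swappable
    public String openDevice(String cameraId) {
        return "hal-device:" + cameraId;         // would talk to the driver
    }
}

class CameraServer implements CameraServiceApi { // framework layer, vendor-agnostic
    private final CameraHalProvider hal;
    CameraServer(CameraHalProvider hal) { this.hal = hal; }
    public String connectDevice(String cameraId) {
        return hal.openDevice(cameraId);         // delegates across the "HIDL" boundary
    }
}

public class LayerDemo {
    public static void main(String[] args) {
        // Swapping VendorHal for another implementation never touches CameraServer.
        CameraServiceApi service = new CameraServer(new VendorHal());
        System.out.println(service.connectDevice("0")); // hal-device:0
    }
}
```

Because the framework only holds a CameraHalProvider reference, replacing the vendor implementation (Qualcomm Camx, MTK HAL3, ...) requires no change above the interface, which is exactly the Treble intent described above.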
3.2.2.3 Creating a CameraDeviceImpl instance from the retrieved device information
android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                    new android.hardware.camera2.impl.CameraDeviceImpl(
                        cameraId,
                        callback,
                        executor,
                        characteristics,
                        mContext.getApplicationInfo().targetSdkVersion);
  • A CameraDevice instance is created, passing in the characteristics just retrieved (the camera device information is attached to the CameraDevice instance). This instance is used again shortly; we will discuss it when it comes up.
3.2.2.4 Calling the remote CameraService to obtain the remote service for the current camera
                    // Use cameraservice's cameradeviceclient implementation for HAL3.2+ devices
                    ICameraService cameraService = CameraManagerGlobal.get().getCameraService();
                    if (cameraService == null) {
                        throw new ServiceSpecificException(
                            ICameraService.ERROR_DISCONNECTED,
                            "Camera service is currently unavailable");
                    }
                    cameraUser = cameraService.connectDevice(callbacks, cameraId,
                            mContext.getOpPackageName(), uid);
  • The main purpose of this call is to connect to the current camera device; it lands in CameraService::connectDevice. The connectDevice call flow:
Status CameraService::connectDevice(
        const sp<hardware::camera2::ICameraDeviceCallbacks>& cameraCb,
        const String16& cameraId,
        const String16& clientPackageName,
        int clientUid,
        /*out*/
        sp<hardware::camera2::ICameraDeviceUser>* device) {

    ATRACE_CALL();
    Status ret = Status::ok();
    String8 id = String8(cameraId);
    sp<CameraDeviceClient> client = nullptr;
    ret = connectHelper<hardware::camera2::ICameraDeviceCallbacks,CameraDeviceClient>(cameraCb, id,
            /*api1CameraId*/-1,
            CAMERA_HAL_API_VERSION_UNSPECIFIED, clientPackageName,
            clientUid, USE_CALLING_PID, API_2,
            /*legacyMode*/ false, /*shimUpdateOnly*/ false,
            /*out*/client);

    if(!ret.isOk()) {
        logRejected(id, getCallingPid(), String8(clientPackageName),
                ret.toString8());
        return ret;
    }

    *device = client;
    return ret;
}
    • The fifth parameter of connectDevice is the out value of this Binder IPC call: after connectDevice returns we hold a CameraDeviceClient object, which is handed back to the application process. Next we look at how that object is created.
  • validateConnectLocked: checks whether the current camera device is usable; the check is simple, it merely verifies that the device exists.
  • handleEvictionsLocked: handles camera exclusivity. Its main job is to deal with the camera device already being used by another client, or a caller with higher priority requesting it, and so on. Only after this function finishes can the camera device be considered fully available, at which point its information is fetched and work begins.
  • CameraFlashlight --> prepareDeviceOpen: at this point we are about to connect to the camera device; if the device has a usable flashlight it must be made ready, but if the flashlight is occupied nothing can be done. This serves only as a notification.
  • getDeviceVersion: determines the camera device version, mainly inside CameraProviderManager::getHighestSupportedVersion, which works out the highest and lowest versions the device supports. We then check the camera facing, which has only two values, CAMERA_FACING_BACK = 0 and CAMERA_FACING_FRONT = 1. These are all preconditions; only when every check passes is the camera device truly usable.
  • makeClient: dispatches on the current CAMERA_DEVICE_API_VERSION. The latest HAL architectures are all based on HALv3, so the client we end up with is CameraDeviceClient.
Status CameraService::makeClient(const sp<CameraService>& cameraService,
        const sp<IInterface>& cameraCb, const String16& packageName, const String8& cameraId,
        int api1CameraId, int facing, int clientPid, uid_t clientUid, int servicePid,
        bool legacyMode, int halVersion, int deviceVersion, apiLevel effectiveApiLevel,
        /*out*/sp<BasicClient>* client) {

    if (halVersion < 0 || halVersion == deviceVersion) {
        // Default path: HAL version is unspecified by caller, create CameraClient
        // based on device version reported by the HAL.
        switch(deviceVersion) {
          case CAMERA_DEVICE_API_VERSION_1_0:
            if (effectiveApiLevel == API_1) {  // Camera1 API route
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new CameraClient(cameraService, tmp, packageName,
                        api1CameraId, facing, clientPid, clientUid,
                        getpid(), legacyMode);
            } else { // Camera2 API route
                ALOGW("Camera using old HAL version: %d", deviceVersion);
                return STATUS_ERROR_FMT(ERROR_DEPRECATED_HAL,
                        "Camera device \"%s\" HAL version %d does not support camera2 API",
                        cameraId.string(), deviceVersion);
            }
            break;
          case CAMERA_DEVICE_API_VERSION_3_0:
          case CAMERA_DEVICE_API_VERSION_3_1:
          case CAMERA_DEVICE_API_VERSION_3_2:
          case CAMERA_DEVICE_API_VERSION_3_3:
          case CAMERA_DEVICE_API_VERSION_3_4:
            if (effectiveApiLevel == API_1) { // Camera1 API route
                sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
                *client = new Camera2Client(cameraService, tmp, packageName,
                        cameraId, api1CameraId,
                        facing, clientPid, clientUid,
                        servicePid, legacyMode);
            } else { // Camera2 API route
                sp<hardware::camera2::ICameraDeviceCallbacks> tmp =
                        static_cast<hardware::camera2::ICameraDeviceCallbacks*>(cameraCb.get());
                *client = new CameraDeviceClient(cameraService, tmp, packageName, cameraId,
                        facing, clientPid, clientUid, servicePid);
            }
            break;
          default:
            // Should not be reachable
            ALOGE("Unknown camera device HAL version: %d", deviceVersion);
            return STATUS_ERROR_FMT(ERROR_INVALID_OPERATION,
                    "Camera device \"%s\" has unknown HAL version %d",
                    cameraId.string(), deviceVersion);
        }
    } else {
        // A particular HAL version is requested by caller. Create CameraClient
        // based on the requested HAL version.
        if (deviceVersion > CAMERA_DEVICE_API_VERSION_1_0 &&
            halVersion == CAMERA_DEVICE_API_VERSION_1_0) {
            // Only support higher HAL version device opened as HAL1.0 device.
            sp<ICameraClient> tmp = static_cast<ICameraClient*>(cameraCb.get());
            *client = new CameraClient(cameraService, tmp, packageName,
                    api1CameraId, facing, clientPid, clientUid,
                    servicePid, legacyMode);
        } else {
            // Other combinations (e.g. HAL3.x open as HAL2.x) are not supported yet.
            ALOGE("Invalid camera HAL version %x: HAL %x device can only be"
                    " opened as HAL %x device", halVersion, deviceVersion,
                    CAMERA_DEVICE_API_VERSION_1_0);
            return STATUS_ERROR_FMT(ERROR_ILLEGAL_ARGUMENT,
                    "Camera device \"%s\" (HAL version %d) cannot be opened as HAL version %d",
                    cameraId.string(), deviceVersion, halVersion);
        }
    }
    return Status::ok();
}


  • CameraClient and Camera2Client are the camera client objects used by earlier system versions; nowadays CameraDeviceClient is used.
    • BnCamera --> ./frameworks/av/camera/include/camera/android/hardware/ICamera.h
    • ICamera --> ./frameworks/av/camera/include/camera/android/hardware/ICamera.h
    • BnCameraDeviceUser --> android/hardware/camera2/BnCameraDeviceUser.h, the Binder object auto-generated from ICameraDeviceUser.aidl. The client object we finally obtain is therefore an ICameraDeviceUser.Stub object.
3.2.2.5 Setting the obtained remote service on the CameraDeviceImpl instance
deviceImpl.setRemoteDevice(cameraUser);
device = deviceImpl;
  • This cameraUser is the ICameraDeviceUser.Stub object created on the CameraService side:
    public void setRemoteDevice(ICameraDeviceUser remoteDevice) throws CameraAccessException {
        synchronized(mInterfaceLock) {
            // TODO: Move from decorator to direct binder-mediated exceptions
            // If setRemoteFailure already called, do nothing
            if (mInError) return;

            mRemoteDevice = new ICameraDeviceUserWrapper(remoteDevice);

            IBinder remoteDeviceBinder = remoteDevice.asBinder();
            // For legacy camera device, remoteDevice is in the same process, and
            // asBinder returns NULL.
            if (remoteDeviceBinder != null) {
                try {
                    remoteDeviceBinder.linkToDeath(this, /*flag*/ 0);
                } catch (RemoteException e) {
                    CameraDeviceImpl.this.mDeviceExecutor.execute(mCallOnDisconnected);

                    throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED,
                            "The camera device has encountered a serious error");
                }
            }

            mDeviceExecutor.execute(mCallOnOpened);
            mDeviceExecutor.execute(mCallOnUnconfigured);
        }
    }
  • mRemoteDevice is the bridge between the application process and the Android camera service side: upper-layer camera operations call through mRemoteDevice into the camera service, which in turn drives the underlying camera driver.

3.2.3 Summary

  • This section started from the familiar openCamera call, which ties the application to CameraService; by studying the CameraService code, we saw how the lower layers reach the camera driver through the HAL. The following sections dig progressively deeper into the camera stack; apologies for any shortcomings.

3.3 Android Camera Internals: the openCamera Module (Part 2)

  • Part 1 of the openCamera module covered the openCamera call flow and how the four layers of the camera stack interact, but some details were left out; this section fills them in to round out the openCamera picture.
  • The API section showed the call that opens the camera: manager.openCamera(mCameraId, mStateCallback, mBackgroundHandler);
  • Here manager is the CameraManager instance. The openCamera method was covered thoroughly in the previous article, but the second argument, mStateCallback, was not examined in depth. Everyone knows it is a camera state callback, but this state is important: it tells the developer what state the camera is in, and only once that state is known can the next step proceed. For example, without knowing whether opening the camera succeeded or failed, you cannot continue.
    private final CameraDevice.StateCallback mStateCallback = new CameraDevice.StateCallback() {

        @Override
        public void onOpened(@NonNull CameraDevice cameraDevice) {
            // This method is called when the camera is opened.  We start camera preview here.
            mCameraOpenCloseLock.release();
            mCameraDevice = cameraDevice;
            createCameraPreviewSession();
        }

        @Override
        public void onDisconnected(@NonNull CameraDevice cameraDevice) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
        }

        @Override
        public void onError(@NonNull CameraDevice cameraDevice, int error) {
            mCameraOpenCloseLock.release();
            cameraDevice.close();
            mCameraDevice = null;
            Activity activity = getActivity();
            if (null != activity) {
                activity.finish();
            }
        }

    };
  • What openCamera (Part 2) explores is how the camera state callbacks are delivered. (The detailed call flow was covered in the previous section and is not repeated here.)

3.3.1 The StateCallback in openCameraDeviceUserAsync

  • openCamera ends up calling openCameraDeviceUserAsync(...), passing its StateCallback argument along; this argument, together with the retrieved CameraCharacteristics, goes into the CameraDeviceImpl constructor.
android.hardware.camera2.impl.CameraDeviceImpl deviceImpl =
                    new android.hardware.camera2.impl.CameraDeviceImpl(
                        cameraId,
                        callback,
                        executor,
                        characteristics,
                        mContext.getApplicationInfo().targetSdkVersion);
  • However, the callback handed to CameraService is not that callback; look at the code:
ICameraDeviceCallbacks callbacks = deviceImpl.getCallbacks();
//......
cameraUser = cameraService.connectDevice(callbacks, cameraId,
                            mContext.getOpPackageName(), uid);
  • This callbacks object is a member of the CameraDeviceImpl instance. So how does it relate to the StateCallback we passed in? We have to look inside CameraDeviceImpl.
    public CameraDeviceImpl(String cameraId, StateCallback callback, Executor executor,
                        CameraCharacteristics characteristics, int appTargetSdkVersion) {
        if (cameraId == null || callback == null || executor == null || characteristics == null) {
            throw new IllegalArgumentException("Null argument given");
        }
        mCameraId = cameraId;
        mDeviceCallback = callback;
        mDeviceExecutor = executor;
        mCharacteristics = characteristics;
        mAppTargetSdkVersion = appTargetSdkVersion;

        final int MAX_TAG_LEN = 23;
        String tag = String.format("CameraDevice-JV-%s", mCameraId);
        if (tag.length() > MAX_TAG_LEN) {
            tag = tag.substring(0, MAX_TAG_LEN);
        }
        TAG = tag;

        Integer partialCount =
                mCharacteristics.get(CameraCharacteristics.REQUEST_PARTIAL_RESULT_COUNT);
        if (partialCount == null) {
            // 1 means partial result is not supported.
            mTotalPartialCount = 1;
        } else {
            mTotalPartialCount = partialCount;
        }
    }
  • The StateCallback passed to the constructor is assigned to mDeviceCallback in CameraDeviceImpl:
    private final CameraDeviceCallbacks mCallbacks = new CameraDeviceCallbacks();
    public CameraDeviceCallbacks getCallbacks() {
        return mCallbacks;
    }
    public class CameraDeviceCallbacks extends ICameraDeviceCallbacks.Stub {
    //......
    }

3.3.2 The CameraDeviceCallbacks callback

  • The android/hardware/camera2/ICameraDeviceCallbacks.h file auto-generated from ICameraDeviceCallbacks.aidl:
class ICameraDeviceCallbacksDefault : public ICameraDeviceCallbacks {
public:
  ::android::IBinder* onAsBinder() override;
  ::android::binder::Status onDeviceError(int32_t errorCode, const ::android::hardware::camera2::impl::CaptureResultExtras& resultExtras) override;
  ::android::binder::Status onDeviceIdle() override;
  ::android::binder::Status onCaptureStarted(const ::android::hardware::camera2::impl::CaptureResultExtras& resultExtras, int64_t timestamp) override;
  ::android::binder::Status onResultReceived(const ::android::hardware::camera2::impl::CameraMetadataNative& result, const ::android::hardware::camera2::impl::CaptureResultExtras& resultExtras, const ::std::vector<::android::hardware::camera2::impl::PhysicalCaptureResultInfo>& physicalCaptureResultInfos) override;
  ::android::binder::Status onPrepared(int32_t streamId) override;
  ::android::binder::Status onRepeatingRequestError(int64_t lastFrameNumber, int32_t repeatingRequestId) override;
  ::android::binder::Status onRequestQueueEmpty() override;

};
  • These callbacks are invoked upward from CameraService. They cover the various states the camera passes through while running: success, failure, data received, and so on. We will not expand on them here; they are explained in detail later when image capture is discussed.
    • onDeviceError
    • onDeviceIdle
    • onCaptureStarted
    • onResultReceived
    • onPrepared
    • onRepeatingRequestError
    • onRequestQueueEmpty

3.3.3 The StateCallback callback

  • StateCallback is one of the three arguments passed to openCamera; it is a callback that reflects the current camera connection state.
    public static abstract class StateCallback {
        public static final int ERROR_CAMERA_IN_USE = 1;
        public static final int ERROR_MAX_CAMERAS_IN_USE = 2;
        public static final int ERROR_CAMERA_DISABLED = 3;
        public static final int ERROR_CAMERA_DEVICE = 4;
        public static final int ERROR_CAMERA_SERVICE = 5;

        /** @hide */
        @Retention(RetentionPolicy.SOURCE)
        @IntDef(prefix = {"ERROR_"}, value =
            {ERROR_CAMERA_IN_USE,
             ERROR_MAX_CAMERAS_IN_USE,
             ERROR_CAMERA_DISABLED,
             ERROR_CAMERA_DEVICE,
             ERROR_CAMERA_SERVICE })
        public @interface ErrorCode {};
        public abstract void onOpened(@NonNull CameraDevice camera); // Must implement
        public void onClosed(@NonNull CameraDevice camera) {
            // Default empty implementation
        }
        public abstract void onDisconnected(@NonNull CameraDevice camera); // Must implement
        public abstract void onError(@NonNull CameraDevice camera,
                @ErrorCode int error); // Must implement
    }
  • The onOpened callback:
    • Fired when the camera device has been opened. Once the camera is known to be in the opened state, you can call createCaptureSession and start using the camera to capture images or video.
  • onOpened is triggered from setRemoteDevice(...), which runs after connectDevice(...) succeeds; it signals that the camera device is connected and fires the camera-opened callback.
    public void setRemoteDevice(ICameraDeviceUser remoteDevice) throws CameraAccessException {
        synchronized(mInterfaceLock) {
            // TODO: Move from decorator to direct binder-mediated exceptions
            // If setRemoteFailure already called, do nothing
            if (mInError) return;

            mRemoteDevice = new ICameraDeviceUserWrapper(remoteDevice);

            IBinder remoteDeviceBinder = remoteDevice.asBinder();
            // For legacy camera device, remoteDevice is in the same process, and
            // asBinder returns NULL.
            if (remoteDeviceBinder != null) {
                try {
                    remoteDeviceBinder.linkToDeath(this, /*flag*/ 0);
                } catch (RemoteException e) {
                    CameraDeviceImpl.this.mDeviceExecutor.execute(mCallOnDisconnected);

                    throw new CameraAccessException(CameraAccessException.CAMERA_DISCONNECTED,
                            "The camera device has encountered a serious error");
                }
            }

            mDeviceExecutor.execute(mCallOnOpened);
            mDeviceExecutor.execute(mCallOnUnconfigured);
        }
    }

    private final Runnable mCallOnOpened = new Runnable() {
        @Override
        public void run() {
            StateCallbackKK sessionCallback = null;
            synchronized(mInterfaceLock) {
                if (mRemoteDevice == null) return; // Camera already closed

                sessionCallback = mSessionStateCallback;
            }
            if (sessionCallback != null) {
                sessionCallback.onOpened(CameraDeviceImpl.this);
            }
            mDeviceCallback.onOpened(CameraDeviceImpl.this);
        }
    };
  • The onClosed callback:
    • Fired when the camera device has been closed. Normally this happens when the developer calls close and releases the camera device currently held; this is expected behavior.
    public void close() {
        synchronized (mInterfaceLock) {
            if (mClosing.getAndSet(true)) {
                return;
            }

            if (mRemoteDevice != null) {
                mRemoteDevice.disconnect();
                mRemoteDevice.unlinkToDeath(this, /*flags*/0);
            }
            if (mRemoteDevice != null || mInError) {
                mDeviceExecutor.execute(mCallOnClosed);
            }

            mRemoteDevice = null;
        }
    }
    private final Runnable mCallOnClosed = new Runnable() {
        private boolean mClosedOnce = false;

        @Override
        public void run() {
            if (mClosedOnce) {
                throw new AssertionError("Don't post #onClosed more than once");
            }
            StateCallbackKK sessionCallback = null;
            synchronized(mInterfaceLock) {
                sessionCallback = mSessionStateCallback;
            }
            if (sessionCallback != null) {
                sessionCallback.onClosed(CameraDeviceImpl.this);
            }
            mDeviceCallback.onClosed(CameraDeviceImpl.this);
            mClosedOnce = true;
        }
    };
  • The onDisconnected callback:
    • The camera device is no longer usable and opening it failed, usually because permission or security-policy issues prevent the device from opening. Once connecting to the camera device produces ERROR_CAMERA_DISCONNECTED, this function is called back, indicating that the device is in a disconnected state.
  • The onError callback:
    • A serious problem occurred while using the camera device, i.e. CameraService --> connectDevice threw an exception:
            } catch (ServiceSpecificException e) {
                if (e.errorCode == ICameraService.ERROR_DEPRECATED_HAL) {
                    throw new AssertionError("Should've gone down the shim path");
                } else if (e.errorCode == ICameraService.ERROR_CAMERA_IN_USE ||
                        e.errorCode == ICameraService.ERROR_MAX_CAMERAS_IN_USE ||
                        e.errorCode == ICameraService.ERROR_DISABLED ||
                        e.errorCode == ICameraService.ERROR_DISCONNECTED ||
                        e.errorCode == ICameraService.ERROR_INVALID_OPERATION) {
                    // Received one of the known connection errors
                    // The remote camera device cannot be connected to, so
                    // set the local camera to the startup error state
                    deviceImpl.setRemoteFailure(e);

                    if (e.errorCode == ICameraService.ERROR_DISABLED ||
                            e.errorCode == ICameraService.ERROR_DISCONNECTED ||
                            e.errorCode == ICameraService.ERROR_CAMERA_IN_USE) {
                        // Per API docs, these failures call onError and throw
                        throwAsPublicException(e);
                    }
                } else {
                    // Unexpected failure - rethrow
                    throwAsPublicException(e);
                }
            } catch (RemoteException e) {
                // Camera service died - act as if it's a CAMERA_DISCONNECTED case
                ServiceSpecificException sse = new ServiceSpecificException(
                    ICameraService.ERROR_DISCONNECTED,
                    "Camera service is currently unavailable");
                deviceImpl.setRemoteFailure(sse);
                throwAsPublicException(sse);
            }
  • deviceImpl.setRemoteFailure(e); is the call that leads to the onError callback.

3.3.4 Summary

  • There is a great deal to learn about the camera stack. These two sections aimed to see the whole through the details: starting from openCamera, we toured the entire framework from top to bottom, going from the shallow to the deep. What follows summarizes what the camera does after a successful connection, inside StateCallback --> onOpened.

3.4 Android Camera Internals: the createCaptureSession Module

  • After openCamera succeeds, CameraDeviceImpl --> createCaptureSession is executed from the onOpened(CameraDevice cameraDevice) callback of CameraDevice.StateCallback; the CameraDevice parameter is the camera device that has just been opened.
  • With the camera device in hand, the next step is to create a capture session. Once the session is established, the camera preview can be set up on top of it; as we move the camera around we see the camera input stream rendered on screen, and from there we can take photos, record video, and so on. The createCaptureSession flow:
  • The figure above shows the createCaptureSession execution flow. The code paths involved are very complex; only the core steps are given here. Below we walk through this material in terms of code structure and code functionality.

3.4.1 CameraDeviceImpl->createCaptureSession

    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }
  • Each Surface is converted into an OutputConfiguration, a class describing camera output data, which includes the Surface plus session-specific settings. Judging from its definition, it implements Parcelable, which means it must be transferred across processes.
  • The Surface list passed to CameraDeviceImpl->createCaptureSession:
    • Each Surface here represents one output stream; multiple Surfaces mean multiple output streams. However many display targets we have, that is how many output streams we need.
    • For taking photos there are two output streams: one for preview and one for capture.
    • For recording video there are two output streams: one for preview and one for recording.
  • Below is the code executed when taking photos. The first surface is used for preview; the second, since we are capturing stills, comes from an ImageReader object, which calls nativeGetSurface in its constructor to obtain the Surface used for capture.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                        }
                    }, null
            );
  • Video recording works the same way: it uses MediaRecorder to receive the video data, and MediaRecorder likewise calls nativeGetSurface during construction to obtain its Surface.
  • The ImageReader->OnImageAvailableListener callback can read the image data backing the Surface and convert it into locally usable data: image width and height, timestamps, and so on. How this image data is collected is described in detail later; here we only touch on it.
    private class SurfaceImage extends android.media.Image {
        public SurfaceImage(int format) {
            mFormat = format;
        }
        @Override
        public void close() {
            ImageReader.this.releaseImage(this);
        }
        public ImageReader getReader() {
            return ImageReader.this;
        }
        private class SurfacePlane extends android.media.Image.Plane {
            private SurfacePlane(int rowStride, int pixelStride, ByteBuffer buffer) {
//......
            }
            final private int mPixelStride;
            final private int mRowStride;
            private ByteBuffer mBuffer;
        }
        private long mNativeBuffer;
        private long mTimestamp;
        private int mTransform;
        private int mScalingMode;
        private SurfacePlane[] mPlanes;
        private int mFormat = ImageFormat.UNKNOWN;
        // If this image is detached from the ImageReader.
        private AtomicBoolean mIsDetached = new AtomicBoolean(false);
        private synchronized native SurfacePlane[] nativeCreatePlanes(int numPlanes,
                int readerFormat);
        private synchronized native int nativeGetWidth();
        private synchronized native int nativeGetHeight();
        private synchronized native int nativeGetFormat(int readerFormat);
        private synchronized native HardwareBuffer nativeGetHardwareBuffer();
    }
  • Before taking a picture, the ImageReader's OnImageAvailableListener callback is also set, by calling setOnImageAvailableListener(OnImageAvailableListener listener, Handler handler). The interface has a single onImageAvailable function, signaling that a captured image is available; we then handle the captured image inside the onImageAvailable callback.
    public interface OnImageAvailableListener {
        void onImageAvailable(ImageReader reader);
    }

3.4.2 CameraDeviceImpl->createCaptureSessionInternal

    private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams)
  • Of the parameters passed in, inputConfig is null, so we only need to pay attention to outputConfigurations.
  • There is a lot of code in createCaptureSessionInternal, but the important part is configuring the Surfaces:
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        operatingMode, sessionParams);
                if (configureSuccess == true && inputConfig != null) {
                    input = mRemoteDevice.getInputSurface();
                }
  • If configuring the surfaces succeeds and inputConfig is non-null, an input Surface is returned; this input surface is a locally configured input stream. This input object is later passed in when the CameraCaptureSessionImpl object is constructed; see the later section on the CameraCaptureSessionImpl constructor.

3.4.3 CameraDeviceImpl->configureStreamsChecked

  • The figure below lists the main steps executed inside the stream-configuration function. Since inputConfig is null here, the core work is the flow in the pink box: creating the output streams (configureStreamsChecked flow comparison).
  • Everything between mRemoteDevice.beginConfigure(); and mRemoteDevice.endConfigure(operatingMode, null); is IPC notifying the service side that input/output streams are being configured. After mRemoteDevice.endConfigure(operatingMode, null); completes, success = true is returned; if the process is interrupted partway, success will certainly not be true.
3.4.3.1 Checking the input stream
  • checkInputConfiguration(inputConfig);
    Since inputConfig is null here, this part does not execute.
3.4.3.2 Checking the output streams
  • The cached list of output streams is checked: if an output stream is already in the list it need not be recreated; otherwise a new stream is created.
            // Streams to create
            HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
            // Streams to delete
            List<Integer> deleteList = new ArrayList<Integer>();
            for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
                int streamId = mConfiguredOutputs.keyAt(i);
                OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);
                if (!outputs.contains(outConfig) || outConfig.isDeferredConfiguration()) {
                    deleteList.add(streamId);
                } else {
                    addSet.remove(outConfig);  // Don't create a stream previously created
                }
            }
  • private final SparseArray<OutputConfiguration> mConfiguredOutputs = new SparseArray<>();
  • mConfiguredOutputs is the in-memory cache of output streams; every time an output stream is created, its streamId and configuration are cached in this SparseArray.
  • After this block of code completes:
    • addSet is the set of output streams about to be created.
    • deleteList is the list of streamIds about to be deleted, which keeps the stream data in mConfiguredOutputs current and usable.
  • Here the stale output streams are deleted:
                // Delete all streams first (to free up HW resources)
                for (Integer streamId : deleteList) {
                    mRemoteDevice.deleteStream(streamId);
                    mConfiguredOutputs.delete(streamId);
                }
  • Here the new output streams are created:
                // Add all new streams
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }
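The addSet/deleteList bookkeeping can be exercised standalone; the sketch below (plain Java, simplified names, using strings in place of OutputConfiguration) reproduces the diff logic:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Simplified model of the stream-diff step in configureStreamsChecked:
// given the cached (streamId -> config) map and the newly requested configs,
// compute which streams to delete and which configs still need a stream.
public class StreamDiff {
    static List<Integer> deleteList(Map<Integer, String> configured, Set<String> outputs) {
        List<Integer> del = new ArrayList<>();
        for (Map.Entry<Integer, String> e : configured.entrySet()) {
            if (!outputs.contains(e.getValue())) del.add(e.getKey()); // stale stream
        }
        return del;
    }

    static Set<String> addSet(Map<Integer, String> configured, Set<String> outputs) {
        Set<String> add = new HashSet<>(outputs);
        add.removeAll(configured.values()); // don't recreate streams that already exist
        return add;
    }

    public static void main(String[] args) {
        // Previously configured: stream 0 -> preview, stream 1 -> record.
        Map<Integer, String> configured = new HashMap<>();
        configured.put(0, "preview");
        configured.put(1, "record");
        // The new session wants preview + still capture.
        Set<String> outputs = new HashSet<>(Arrays.asList("preview", "capture"));
        System.out.println(deleteList(configured, outputs)); // [1]  (record is stale)
        System.out.println(addSet(configured, outputs));     // [capture]
    }
}
```

The preview stream is left untouched, which is exactly why re-creating a session with an overlapping surface set is cheaper than tearing everything down.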
3.4.3.3 mRemoteDevice.createStream(outConfig)
  • This IPC call goes straight to the method declared in CameraDeviceClient.h: virtual binder::Status createStream( const hardware::camera2::params::OutputConfiguration &outputConfiguration, /*out*/ int32_t* newStreamId = NULL) override;
  • The first parameter, outputConfiguration, represents the output surface; the second is an out parameter, i.e. the value returned once the IPC call finishes. The heart of the method is the code below.
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    size_t numBufferProducers = bufferProducers.size();
//......
    for (auto& bufferProducer : bufferProducers) {
//......
        sp<Surface> surface;
        res = createSurfaceFromGbp(streamInfo, isStreamInfoValid, surface, bufferProducer,
                physicalCameraId);
//......
        surfaces.push_back(surface);
    }
//......
    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
    err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
            streamInfo.height, streamInfo.format, streamInfo.dataSpace,
            static_cast<camera3_stream_rotation_t>(outputConfiguration.getRotation()),
            &streamId, physicalCameraId, &surfaceIds, outputConfiguration.getSurfaceSetID(),
            isShared);
  • The for loop takes the GraphicBufferProducers returned by outputConfiguration.getGraphicBufferProducers(), creates a surface for each one, and validates it; each valid surface is added to the surfaces collection, and then mDevice->createStream is called to actually create the stream.

  • A bit of Android graphics background helps here: the buffers that finally get drawn on screen are allocated in graphics memory, and buffer management involves two modules, framebuffer and gralloc. The framebuffer path puts rendered buffers on screen, while gralloc allocates buffers. The buffers rotated through camera preview are no exception: they are ultimately allocated by gralloc, described at the native layer by a private_handle_t pointer wrapped in several layers on the way up, and shared between processes.

  • At any moment in its life cycle, though, a buffer has exactly one owner, and ownership keeps rotating: this is Android's classic producer-consumer model, with BufferProducer as the producer and BufferConsumer as the consumer. Each buffer is locked while ownership is being transferred, so no other object can modify it mid-handover; that is what keeps buffer access synchronized.

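The single-owner rotation described above can be reduced to a small state machine. The sketch below is a toy model only, not the real gralloc/BufferQueue API: each buffer cycles FREE → DEQUEUED (producer) → QUEUED → ACQUIRED (consumer) → FREE.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model of the BufferQueue ownership cycle: a buffer belongs to exactly
// one owner at a time as it rotates between producer and consumer.
public class BufferCycle {
    enum State { FREE, DEQUEUED, QUEUED, ACQUIRED }

    static class Buffer { State state = State.FREE; int frame = -1; }

    private final Queue<Buffer> free = new ArrayDeque<>();
    private final Queue<Buffer> queued = new ArrayDeque<>();

    BufferCycle(int count) { for (int i = 0; i < count; i++) free.add(new Buffer()); }

    // Producer side (the camera filling frames)
    Buffer dequeue() {
        Buffer b = free.remove();      // take an unowned buffer
        b.state = State.DEQUEUED;
        return b;
    }
    void queue(Buffer b, int frame) { b.frame = frame; b.state = State.QUEUED; queued.add(b); }

    // Consumer side (display / ImageReader)
    Buffer acquire() { Buffer b = queued.remove(); b.state = State.ACQUIRED; return b; }
    void release(Buffer b) { b.state = State.FREE; free.add(b); }

    public static void main(String[] args) {
        BufferCycle q = new BufferCycle(3);
        BufferCycle.Buffer b = q.dequeue();   // producer owns it
        q.queue(b, 0);                        // handed to the consumer side
        BufferCycle.Buffer c = q.acquire();   // consumer owns it
        q.release(c);                         // back to the free pool
        System.out.println("frame " + c.frame + " state " + c.state);
    }
}
```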
  • 參考:Android 8.0系統源碼分析--相機createCaptureSession創建過程源碼分析

3.3.3.4 mDevice->createStream
  • First the surface passed in from the previous step (the future consumer) is added to a queue, then the overloaded createStream is called for the real work. Here width and height are the dimensions of the surface being configured, and format is its pixel format, queried from the ANativeWindow — anw->query(anw, NATIVE_WINDOW_FORMAT, &format) — as described earlier; dataSpace (of type android_dataspace) describes how the buffer contents are to be interpreted during buffer rotation. A Camera3OutputStream local variable is then created — this is the configured stream itself. The if/else chain creates a different stream object depending on intent; a still-capture stream, for example, has format HAL_PIXEL_FORMAT_BLOB, so the first if branch runs and a Camera3OutputStream is created. After creation, *id = mNextStreamId++ assigns the new stream's id, so stream ids are monotonically increasing. Normally mStatus was set to STATUS_UNCONFIGURED in initializeCommonLocked() via internalUpdateStatusLocked, so the switch/case enters case STATUS_UNCONFIGURED and simply breaks; the local variable wasActive therefore stays false and the method returns OK.
  • That completes createStream. Remember that it runs inside the framework's for loop: our walkthrough configured only one surface, but with multiple surfaces it executes once per surface, and the Camera3OutputStream logs print once per stream — very handy when debugging.
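The id handout described above — one createStream call per surface, each getting the next monotonically increasing id — can be sketched as follows (hypothetical `StreamRegistry` class, not the real Camera3Device):

```java
import java.util.*;

// Sketch of Camera3Device-style stream registration: every createStream call
// hands out the next monotonically increasing stream id, one call per surface
// in the framework's for-loop.
public class StreamRegistry {
    private int nextStreamId = 0;
    private final Map<Integer, String> streams = new LinkedHashMap<>();

    int createStream(String surfaceName) {
        int id = nextStreamId++;          // mirrors "*id = mNextStreamId++"
        streams.put(id, surfaceName);
        return id;
    }

    public static void main(String[] args) {
        StreamRegistry r = new StreamRegistry();
        // Two surfaces configured -> createStream runs twice, ids are 0 and 1.
        int preview = r.createStream("preview");
        int jpeg = r.createStream("jpeg");
        System.out.println(preview + " " + jpeg); // prints "0 1"
    }
}
```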
3.3.3.5 mRemoteDevice.endConfigure
binder::Status CameraDeviceClient::endConfigure(int operatingMode,
        const hardware::camera2::impl::CameraMetadataNative& sessionParams) {
//......
    status_t err = mDevice->configureStreams(sessionParams, operatingMode);
//......
}
  • The second parameter passed in here is usually null. The call chain is:
    Camera3Device::configureStreams
    --->Camera3Device::filterParamsAndConfigureLocked
    --->Camera3Device::configureStreamsLocked
    Camera3Device::configureStreamsLocked then calls straight into the HAL's stream configuration: res = mInterface->configureStreams(sessionBuffer, &config, bufferSizes);
  • Finishing the input stream configuration:
    if (mInputStream != NULL && mInputStream->isConfiguring()) {
        res = mInputStream->finishConfiguration();
        if (res != OK) {
            CLOGE("Can't finish configuring input stream %d: %s (%d)",
                    mInputStream->getId(), strerror(-res), res);
            cancelStreamsConfigurationLocked();
            return BAD_VALUE;
        }
    }
  • Finishing the output stream configuration:
    for (size_t i = 0; i < mOutputStreams.size(); i++) {
        sp<Camera3OutputStreamInterface> outputStream =
            mOutputStreams.editValueAt(i);
        if (outputStream->isConfiguring() && !outputStream->isConsumerConfigurationDeferred()) {
            res = outputStream->finishConfiguration();
            if (res != OK) {
                CLOGE("Can't finish configuring output stream %d: %s (%d)",
                        outputStream->getId(), strerror(-res), res);
                cancelStreamsConfigurationLocked();
                return BAD_VALUE;
            }
        }
    }
    outputStream->finishConfiguration()
    --->Camera3Stream::finishConfiguration
    --->Camera3OutputStream::configureQueueLocked
    --->Camera3OutputStream::configureConsumerQueueLocked
  • The core steps of Camera3OutputStream::configureConsumerQueueLocked are:
    // Configure consumer-side ANativeWindow interface. The listener may be used
    // to notify buffer manager (if it is used) of the returned buffers.
    res = mConsumer->connect(NATIVE_WINDOW_API_CAMERA,
            /*listener*/mBufferReleasedListener,
            /*reportBufferRemoval*/true);
    if (res != OK) {
        ALOGE("%s: Unable to connect to native window for stream %d",
                __FUNCTION__, mId);
        return res;
    }
  • mConsumer is the surface created during configuration; we connect to it and then allocate the space we need. Next the underlying ANativeWindow is queried — ANativeWindow is the native window abstraction that EGL/OpenGL ES renders into, and on Android it has two main implementations: Surface on the application side, and the framebuffer-backed window used by SurfaceFlinger.
    int maxConsumerBuffers;
    res = static_cast<ANativeWindow*>(mConsumer.get())->query(
            mConsumer.get(),
            NATIVE_WINDOW_MIN_UNDEQUEUED_BUFFERS, &maxConsumerBuffers);
    if (res != OK) {
        ALOGE("%s: Unable to query consumer undequeued"
                " buffer count for stream %d", __FUNCTION__, mId);
        return res;
    }
  • That completes the analysis of CameraDeviceImpl->configureStreamsChecked. Next we create the CameraCaptureSession based on the result of stream configuration.

3.3.4 The CameraCaptureSessionImpl constructor

            try {
                // configure streams and then block until IDLE
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        operatingMode, sessionParams);
                if (configureSuccess == true && inputConfig != null) {
                    input = mRemoteDevice.getInputSurface();
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
                if (DEBUG) {
                    Log.v(TAG, "createCaptureSession - failed with exception ", e);
                }
            }
  • When stream configuration finishes, configureSuccess records whether it succeeded; it is then used when constructing CameraCaptureSessionImpl:
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
                for (OutputConfiguration outConfig : outputConfigurations) {
                    surfaces.add(outConfig.getSurface());
                }
                StreamConfigurationMap config =
                    getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);

                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        callback, executor, this, mDeviceExecutor, configureSuccess,
                        mCharacteristics);
            } else {
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
            }

            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;

            if (pendingException != null) {
                throw pendingException;
            }

            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
  • Normally the plain CameraCaptureSessionImpl constructor runs:
    CameraCaptureSessionImpl(int id, Surface input,
            CameraCaptureSession.StateCallback callback, Executor stateExecutor,
            android.hardware.camera2.impl.CameraDeviceImpl deviceImpl,
            Executor deviceStateExecutor, boolean configureSuccess) {
        if (callback == null) {
            throw new IllegalArgumentException("callback must not be null");
        }

        mId = id;
        mIdString = String.format("Session %d: ", mId);

        mInput = input;
        mStateExecutor = checkNotNull(stateExecutor, "stateExecutor must not be null");
        mStateCallback = createUserStateCallbackProxy(mStateExecutor, callback);

        mDeviceExecutor = checkNotNull(deviceStateExecutor,
                "deviceStateExecutor must not be null");
        mDeviceImpl = checkNotNull(deviceImpl, "deviceImpl must not be null");
        mSequenceDrainer = new TaskDrainer<>(mDeviceExecutor, new SequenceDrainListener(),
                /*name*/"seq");
        mIdleDrainer = new TaskSingleDrainer(mDeviceExecutor, new IdleDrainListener(),
                /*name*/"idle");
        mAbortDrainer = new TaskSingleDrainer(mDeviceExecutor, new AbortDrainListener(),
                /*name*/"abort");
        if (configureSuccess) {
            mStateCallback.onConfigured(this);
            if (DEBUG) Log.v(TAG, mIdString + "Created session successfully");
            mConfigureSuccess = true;
        } else {
            mStateCallback.onConfigureFailed(this);
            mClosed = true; // do not fire any other callbacks, do not allow any other work
            Log.e(TAG, mIdString + "Failed to create capture session; configuration failed");
            mConfigureSuccess = false;
        }
    }
  • At the end of the constructor, if configureSuccess is true, mStateCallback.onConfigured(this) is invoked; on failure, the mStateCallback.onConfigureFailed(this) callback fires and the session is marked closed.
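That final branch can be reduced to a few lines. The sketch below uses simplified stand-in types, not the real android.hardware.camera2 classes:

```java
// Reduced sketch of the branch at the end of the CameraCaptureSessionImpl
// constructor: success fires onConfigured; failure fires onConfigureFailed
// and marks the session closed so no further work is accepted.
public class SessionStateDemo {
    interface StateCallback {
        void onConfigured(SessionStateDemo session);
        void onConfigureFailed(SessionStateDemo session);
    }

    boolean closed = false;
    String lastEvent = null;

    SessionStateDemo(boolean configureSuccess, StateCallback cb) {
        if (configureSuccess) {
            cb.onConfigured(this);
        } else {
            cb.onConfigureFailed(this);
            closed = true; // do not fire any other callbacks, allow no more work
        }
    }

    public static void main(String[] args) {
        StateCallback cb = new StateCallback() {
            public void onConfigured(SessionStateDemo s) { s.lastEvent = "configured"; }
            public void onConfigureFailed(SessionStateDemo s) { s.lastEvent = "failed"; }
        };
        SessionStateDemo ok = new SessionStateDemo(true, cb);
        SessionStateDemo bad = new SessionStateDemo(false, cb);
        System.out.println(ok.lastEvent + " / " + bad.lastEvent + " closed=" + bad.closed);
    }
}
```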

3.3.5 Summary

  • That completes the analysis of createCaptureSession, the most important precondition for preview. If the session is created successfully, preview will normally work; if creation fails, preview is guaranteed to be black — so a black preview screen should make this path your prime suspect. After the session is created, the framework calls back to the app through CameraCaptureSession.StateCallback's public abstract void onConfigured(@NonNull CameraCaptureSession session), signalling that the session is ready. We can then take the CameraCaptureSession parameter of that callback and call its setRepeatingRequest method to start preview; once that completes, preview is up and running.

3.4 Android Camera internals: setRepeatingRequest and capture

  • After createCaptureSession, the camera session exists and preview can begin. Once the preview callback onCaptureCompleted has fired, a picture can be taken (reaching onCaptureCompleted means a complete frame of capture data has come back and its data can be grabbed). Since preview and still capture share most of their flow — a capture is just one node in the ongoing preview — we cover both in this section.

3.4.1 Preview

  • Preview is started by CameraCaptureSession-->setRepeatingRequest; this section looks at how the preview operation is launched.
  • setRepeatingRequest is invoked from CameraCaptureSession.StateCallback.onConfigured(@NonNull CameraCaptureSession session), which fires after the output streams configured in createCaptureSession(List<Surface> outputs, CameraCaptureSession.StateCallback callback, Handler handler) have been set up successfully.
            mCameraDevice.createCaptureSession(Arrays.asList(surface, mImageReader.getSurface()),
                    new CameraCaptureSession.StateCallback() {

                        @Override
                        public void onConfigured(@NonNull CameraCaptureSession cameraCaptureSession) {
                            // The camera is already closed
                            if (null == mCameraDevice) {
                                return;
                            }

                            // When the session is ready, we start displaying the preview.
                            mCaptureSession = cameraCaptureSession;
                            try {
                                // Auto focus should be continuous for camera preview.
                                mPreviewRequestBuilder.set(CaptureRequest.CONTROL_AF_MODE,
                                        CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
                                // Flash is automatically enabled when necessary.
                                setAutoFlash(mPreviewRequestBuilder);

                                // Finally, we start displaying the camera preview.
                                mPreviewRequest = mPreviewRequestBuilder.build();
                                mCaptureSession.setRepeatingRequest(mPreviewRequest,
                                        mCaptureCallback, mBackgroundHandler);
                            } catch (CameraAccessException e) {
                                e.printStackTrace();
                            }
                        }

                        @Override
                        public void onConfigureFailed(
                                @NonNull CameraCaptureSession cameraCaptureSession) {
                            showToast("Failed");
                        }
                    }, null
            );
  • Ultimately mCaptureSession.setRepeatingRequest(mPreviewRequest, mCaptureCallback, mBackgroundHandler); is executed to start camera preview. Operations such as focusing can also be done in this onConfigured callback.
    • onConfigured means stream configuration is complete and the camera can preview.
    • onConfigureFailed means stream configuration failed; the preview stays black.
    public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        checkRepeatingRequest(request);

        synchronized (mDeviceImpl.mInterfaceLock) {
            checkNotClosed();

            handler = checkHandler(handler, callback);

            return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                    createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
        }
    }
  • The first parameter, CaptureRequest, describes the properties of the capture request: which targets it draws on, whether it reuses a previous request, and so on.
  • The second parameter, CaptureCallback, is the capture callback the developer interacts with directly.
    public interface CaptureCallback {
        public static final int NO_FRAMES_CAPTURED = -1;
        public void onCaptureStarted(CameraDevice camera,
                CaptureRequest request, long timestamp, long frameNumber);
        public void onCapturePartial(CameraDevice camera,
                CaptureRequest request, CaptureResult result);
        public void onCaptureProgressed(CameraDevice camera,
                CaptureRequest request, CaptureResult partialResult);
        public void onCaptureCompleted(CameraDevice camera,
                CaptureRequest request, TotalCaptureResult result);
        public void onCaptureFailed(CameraDevice camera,
                CaptureRequest request, CaptureFailure failure);
        public void onCaptureSequenceCompleted(CameraDevice camera,
                int sequenceId, long frameNumber);
        public void onCaptureSequenceAborted(CameraDevice camera,
                int sequenceId);
        public void onCaptureBufferLost(CameraDevice camera,
                CaptureRequest request, Surface target, long frameNumber);
    }
  • These must be implemented by the developer. How they are invoked from below is covered later in the CameraDeviceCallbacks module — they are all driven up through the CameraDeviceCallbacks binder callbacks.

  • Now let's trace the call chain: mCaptureSession.setRepeatingRequest --->CameraDeviceImpl.setRepeatingRequest --->CameraDeviceImpl.submitCaptureRequest.
    CameraDeviceImpl.setRepeatingRequest passes true as the third argument. This is worth emphasizing because CameraDeviceImpl.capture later goes through submitCaptureRequest as well, but passes false. The third parameter, boolean repeating, being true means the capture is a continuous process with camera frames arriving constantly; false means a one-shot capture — a photo.

    public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Executor executor) throws CameraAccessException {
        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        return submitCaptureRequest(requestList, callback, executor, /*streaming*/true);
    }
    private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
            Executor executor, boolean repeating)  {
//......
    }
  • CameraDeviceImpl.submitCaptureRequest does three things:
    • Validate each request in the CaptureRequest list — essentially checking that the Surfaces bound to the request exist.
    • Send the request down to the lower layers.
    • Bind the returned request info to the supplied CaptureCallback so later callbacks are routed correctly.
    • Of these three, the second is the core work.
3.4.1.1 Sending the captureRequest downward
            SubmitInfo requestInfo;

            CaptureRequest[] requestArray = requestList.toArray(new CaptureRequest[requestList.size()]);
            // Convert Surface to streamIdx and surfaceIdx
            for (CaptureRequest request : requestArray) {
                request.convertSurfaceToStreamId(mConfiguredOutputs);
            }

            requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
            if (DEBUG) {
                Log.v(TAG, "last frame number " + requestInfo.getLastFrameNumber());
            }

            for (CaptureRequest request : requestArray) {
                request.recoverStreamIdToSurface();
            }
  • request.convertSurfaceToStreamId(mConfiguredOutputs); records the locally cached surface-to-stream mapping and sends it over binder to the camera service layer, so the camera service does not have to resolve it again.

  • requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating); calls straight into the camera service side; this is the part worth dwelling on.

  • request.recoverStreamIdToSurface(); restores the request after the call returns, clearing the temporary in-memory data.
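The convert/recover pair above can be modeled as a reversible substitution: swap surfaces for configured stream ids before the binder call, then swap them back. Hypothetical types below stand in for CaptureRequest and the configured-output tables:

```java
import java.util.*;

// Sketch of the convertSurfaceToStreamId / recoverStreamIdToSurface pair:
// before the binder call the request's surfaces are replaced by their already
// configured stream ids so CameraService need not re-resolve them; afterwards
// the surfaces are restored.
public class RequestConversion {
    List<String> surfaces = new ArrayList<>();      // targets set by the app
    List<Integer> streamIds = new ArrayList<>();    // what goes over binder

    void convertSurfaceToStreamId(Map<String, Integer> configuredOutputs) {
        for (String s : surfaces) {
            streamIds.add(configuredOutputs.get(s)); // look up the configured stream
        }
        surfaces.clear();                            // only ids cross the binder
    }

    void recoverStreamIdToSurface(Map<Integer, String> configuredById) {
        for (int id : streamIds) surfaces.add(configuredById.get(id));
        streamIds.clear();
    }

    public static void main(String[] args) {
        Map<String, Integer> bySurface = new HashMap<>();
        bySurface.put("preview", 0); bySurface.put("jpeg", 1);
        RequestConversion req = new RequestConversion();
        req.surfaces.addAll(Arrays.asList("preview", "jpeg"));
        req.convertSurfaceToStreamId(bySurface);
        System.out.println("over binder: " + req.streamIds);   // [0, 1]
        Map<Integer, String> byId = new HashMap<>();
        byId.put(0, "preview"); byId.put(1, "jpeg");
        req.recoverStreamIdToSurface(byId);
        System.out.println("recovered: " + req.surfaces);      // [preview, jpeg]
    }
}
```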

  • CameraDeviceClient::submitRequest--->CameraDeviceClient::submitRequestList:

    • This function is long, and much of the early part re-validates cached state. The core is the branch below: for preview, streaming is true and the first branch runs — err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList, &(submitInfo->mLastFrameNumber)); — while a still capture takes the else branch.
    • The submitInfo argument is the out-parameter returned to the upper layer; in the preview case the frame data keeps updating, so the latest frame number is refreshed each time.
    if (streaming) {
        err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
        if (err != OK) {
            String8 msg = String8::format(
                "Camera %s:  Got error %s (%d) after trying to set streaming request",
                mCameraIdStr.string(), strerror(-err), err);
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
                    msg.string());
        } else {
            Mutex::Autolock idLock(mStreamingRequestIdLock);
            mStreamingRequestId = submitInfo->mRequestId;
        }
    } else {
        err = mDevice->captureList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
        if (err != OK) {
            String8 msg = String8::format(
                "Camera %s: Got error %s (%d) after trying to submit capture request",
                mCameraIdStr.string(), strerror(-err), err);
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
                    msg.string());
        }
        ALOGV("%s: requestId = %d ", __FUNCTION__, submitInfo->mRequestId);
    }
  • Camera3Device::setStreamingRequestList--->Camera3Device::submitRequestsHelper:
status_t Camera3Device::submitRequestsHelper(
        const List<const PhysicalCameraSettingsList> &requests,
        const std::list<const SurfaceMap> &surfaceMaps,
        bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock il(mInterfaceLock);
    Mutex::Autolock l(mLock);

    status_t res = checkStatusOkToCaptureLocked();
    if (res != OK) {
        // error logged by previous call
        return res;
    }

    RequestList requestList;

    res = convertMetadataListToRequestListLocked(requests, surfaceMaps,
            repeating, /*out*/&requestList);
    if (res != OK) {
        // error logged by previous call
        return res;
    }

    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }
//......
    return res;
}
  • During preview, mRequestThread->setRepeatingRequests(requestList, lastFrameNumber); runs; a still capture runs mRequestThread->queueRequestList(requestList, lastFrameNumber); instead.

  • mRequestThread->setRepeatingRequests:

status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    mRepeatingRequests.clear();
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}
  • The submitted CaptureRequests replace the previous repeating request list, and the HAL is notified that new requests are pending; it then starts working and streams results up continuously. This runs on the RequestThread defined in Camera3Device, which keeps capturing while previewing, so the camera stays in the preview state.
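The two submission paths — a repeating list that is replaced wholesale and re-issued every frame, versus a queue of one-shot requests — can be sketched as follows. This is a simplified model of the scheduling idea, not the real RequestThread (which batches, pauses, and interleaves more carefully):

```java
import java.util.*;

// Minimal model of RequestThread's two submission paths: setRepeatingRequests
// replaces the repeating list wholesale, while queued one-shot captures take
// priority for a single frame before the repeating request resumes.
public class RequestThreadModel {
    final List<String> repeatingRequests = new ArrayList<>();
    final Deque<String> requestQueue = new ArrayDeque<>();
    long frameNumber = 0;

    void setRepeatingRequests(List<String> requests) {
        repeatingRequests.clear();                 // old repeating burst is dropped
        repeatingRequests.addAll(requests);
    }

    // Returns the next request to hand to the HAL: queued one-shots win,
    // otherwise the repeating request is re-issued indefinitely.
    String nextRequest() {
        frameNumber++;
        if (!requestQueue.isEmpty()) return requestQueue.poll();
        return repeatingRequests.isEmpty() ? null : repeatingRequests.get(0);
    }

    public static void main(String[] args) {
        RequestThreadModel t = new RequestThreadModel();
        t.setRepeatingRequests(Arrays.asList("preview"));
        System.out.println(t.nextRequest()); // preview
        t.requestQueue.add("still-capture"); // a capture() call lands in the queue
        System.out.println(t.nextRequest()); // still-capture
        System.out.println(t.nextRequest()); // preview again
    }
}
```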
3.4.1.2 Binding the returned request info to the CaptureCallback
            if (callback != null) {
                mCaptureCallbackMap.put(requestInfo.getRequestId(),
                        new CaptureCallbackHolder(
                            callback, requestList, executor, repeating, mNextSessionId - 1));
            } else {
                if (DEBUG) {
                    Log.d(TAG, "Listen for request " + requestInfo.getRequestId() + " is null");
                }
            }
    /** map request IDs to callback/request data */
    private final SparseArray<CaptureCallbackHolder> mCaptureCallbackMap =
            new SparseArray<CaptureCallbackHolder>();
  • The requestInfo returned from submitting the captureRequest describes the result of the request; requestInfo.getRequestId() is bound to a CaptureCallbackHolder. Camera2 supports submitting many CaptureRequests, and without this binding the later callbacks would be hopelessly scrambled — or never arrive at all — leaving the developer unable to proceed.

  • Now look at where the callbacks are used: CameraDeviceCallbacks.aidl is the callback interface from the camera service process back to the app process. Inside it, the CaptureCallback bound to the CaptureRequest is looked up and invoked, so the developer's callback runs directly.

  • Below is the CameraDeviceCallbacks onCaptureStarted callback:
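The binding mechanism is just a map keyed by the service-assigned requestId. A sketch with a HashMap standing in for the framework's SparseArray (hypothetical `CallbackRouter` class):

```java
import java.util.*;

// Sketch of the requestId -> CaptureCallbackHolder binding: every submitted
// request is keyed by the requestId CameraService returned, so a later binder
// callback can be routed to the right app callback.
public class CallbackRouter {
    interface CaptureCallback { void onCaptureCompleted(int requestId); }

    private final Map<Integer, CaptureCallback> captureCallbackMap = new HashMap<>();

    int submit(int requestIdFromService, CaptureCallback cb) {
        captureCallbackMap.put(requestIdFromService, cb);
        return requestIdFromService;
    }

    // Called from the binder thread when CameraDeviceCallbacks fires.
    boolean dispatchCompleted(int requestId) {
        CaptureCallback cb = captureCallbackMap.get(requestId);
        if (cb == null) return false;   // unknown id: nothing to route
        cb.onCaptureCompleted(requestId);
        return true;
    }

    public static void main(String[] args) {
        CallbackRouter router = new CallbackRouter();
        final int[] seen = {-1};
        router.submit(7, id -> seen[0] = id);
        router.dispatchCompleted(7);
        System.out.println("routed to request " + seen[0]); // routed to request 7
    }
}
```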

        public void onCaptureStarted(final CaptureResultExtras resultExtras, final long timestamp) {
            int requestId = resultExtras.getRequestId();
            final long frameNumber = resultExtras.getFrameNumber();

            if (DEBUG) {
                Log.d(TAG, "Capture started for id " + requestId + " frame number " + frameNumber);
            }
            final CaptureCallbackHolder holder;

            synchronized(mInterfaceLock) {
                if (mRemoteDevice == null) return; // Camera already closed

                // Get the callback for this frame ID, if there is one
                holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId);

                if (holder == null) {
                    return;
                }

                if (isClosed()) return;

                // Dispatch capture start notice
                final long ident = Binder.clearCallingIdentity();
                try {
                    holder.getExecutor().execute(
                        new Runnable() {
                            @Override
                            public void run() {
                                if (!CameraDeviceImpl.this.isClosed()) {
                                    final int subsequenceId = resultExtras.getSubsequenceId();
                                    final CaptureRequest request = holder.getRequest(subsequenceId);

                                    if (holder.hasBatchedOutputs()) {
                                        // Send derived onCaptureStarted for requests within the
                                        // batch
                                        final Range<Integer> fpsRange =
                                            request.get(CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE);
                                        for (int i = 0; i < holder.getRequestCount(); i++) {
                                            holder.getCallback().onCaptureStarted(
                                                CameraDeviceImpl.this,
                                                holder.getRequest(i),
                                                timestamp - (subsequenceId - i) *
                                                NANO_PER_SECOND/fpsRange.getUpper(),
                                                frameNumber - (subsequenceId - i));
                                        }
                                    } else {
                                        holder.getCallback().onCaptureStarted(
                                            CameraDeviceImpl.this,
                                            holder.getRequest(resultExtras.getSubsequenceId()),
                                            timestamp, frameNumber);
                                    }
                                }
                            }
                        });
                } finally {
                    Binder.restoreCallingIdentity(ident);
                }
            }
        }
  • holder = CameraDeviceImpl.this.mCaptureCallbackMap.get(requestId); retrieves the holder, and then holder.getCallback().onCaptureStarted(...) is invoked directly with the derived timestamp and frame number, as shown in the code above.

3.4.2 Taking a picture

  • To take a picture the developer calls mCaptureSession.capture(mPreviewRequestBuilder.build(), mCaptureCallback, mBackgroundHandler); — the capture flow closely mirrors preview, differing only in the arguments passed along the way.
    public int capture(CaptureRequest request, CaptureCallback callback, Executor executor)
            throws CameraAccessException {
        if (DEBUG) {
            Log.d(TAG, "calling capture");
        }
        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        return submitCaptureRequest(requestList, callback, executor, /*streaming*/false);
    }
  • Capture also goes through submitCaptureRequest, but with false as the third argument: there is no need to keep pulling frames from the HAL in a loop, a single frame is enough.

  • Preview and capture diverge in CameraDeviceClient::submitRequestList:

    if (streaming) {
//......
    } else {
        err = mDevice->captureList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
        if (err != OK) {
            String8 msg = String8::format(
                "Camera %s: Got error %s (%d) after trying to submit capture request",
                mCameraIdStr.string(), strerror(-err), err);
            ALOGE("%s: %s", __FUNCTION__, msg.string());
            res = STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION,
                    msg.string());
        }
        ALOGV("%s: requestId = %d ", __FUNCTION__, submitInfo->mRequestId);
    }
  • This then calls mDevice->captureList--->Camera3Device::submitRequestsHelper:
status_t Camera3Device::submitRequestsHelper(
        const List<const PhysicalCameraSettingsList> &requests,
        const std::list<const SurfaceMap> &surfaceMaps,
        bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
//......
    RequestList requestList;
//......
    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }
//......
    return res;
}
  • Which runs queueRequestList on the Camera3Device::RequestThread thread.
status_t Camera3Device::RequestThread::queueRequestList(
        List<sp<CaptureRequest> > &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    for (List<sp<CaptureRequest> >::iterator it = requests.begin(); it != requests.end();
            ++it) {
        mRequestQueue.push_back(*it);
    }

    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mFrameNumber + mRequestQueue.size() - 1;
        ALOGV("%s: requestId %d, mFrameNumber %" PRId32 ", lastFrameNumber %" PRId64 ".",
              __FUNCTION__, (*(requests.begin()))->mResultExtras.requestId, mFrameNumber,
              *lastFrameNumber);
    }

    unpauseForNewRequests();

    return OK;
}
  • *lastFrameNumber = mFrameNumber + mRequestQueue.size() - 1; is the key line: it computes the frame number the last queued capture request will receive.
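The arithmetic is straightforward: if mFrameNumber is the number the next submitted frame will get, then after queuing N requests the last one becomes frame mFrameNumber + N - 1. A quick check under that assumption:

```java
// Demonstrates the lastFrameNumber bookkeeping from queueRequestList:
// nextFrameNumber is what the next frame will be numbered, queueSize is how
// many requests were just queued.
public class FrameNumberDemo {
    static long lastFrameNumber(long nextFrameNumber, int queueSize) {
        return nextFrameNumber + queueSize - 1;
    }

    public static void main(String[] args) {
        // Frames 0..99 already issued, three capture requests queued:
        // they become frames 100, 101, 102, so the last one is 102.
        System.out.println(lastFrameNumber(100, 3)); // 102
    }
}
```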

  • Where is the image captured during a still shot?
    • camera1 provided the PictureCallback interface for real-time callbacks, where the image data could be obtained.
    • camera2 has no such interface; instead it provides ImageReader.OnImageAvailableListener.
    public interface OnImageAvailableListener {
        /**
         * Callback that is called when a new image is available from ImageReader.
         *
         * @param reader the ImageReader the callback is associated with.
         * @see ImageReader
         * @see Image
         */
        void onImageAvailable(ImageReader reader);
    }
  • mImageReader must be set up before openCamera is called at the API level:
                mImageReader = ImageReader.newInstance(largest.getWidth(), largest.getHeight(),
                        ImageFormat.JPEG, /*maxImages*/2);
                mImageReader.setOnImageAvailableListener(
                        mOnImageAvailableListener, mBackgroundHandler);
  • ImageReader has a getSurface() function; this is ImageReader's capture output stream. When taking a picture there are usually two output streams (outputSurface objects): a preview stream and a capture stream. As described in the createCaptureSession section, the capture stream set up via ImageReader is registered with the camera service.
    public Surface getSurface() {
        return mSurface;
    }

The ImageReader callback interface

  • Following the ImageReader flow above, ImageReader.OnImageAvailableListener->onImageAvailable is invoked, and ImageReader->acquireNextImage returns the captured image. ImageReader can also deliver streaming preview data; SurfacePlane wraps the returned ByteBuffer so the developer can read it in real time.
private class SurfacePlane extends android.media.Image.Plane {
            private SurfacePlane(int rowStride, int pixelStride, ByteBuffer buffer) {
                mRowStride = rowStride;
                mPixelStride = pixelStride;
                mBuffer = buffer;
                /**
                 * Set the byteBuffer order according to host endianness (native
                 * order), otherwise, the byteBuffer order defaults to
                 * ByteOrder.BIG_ENDIAN.
                 */
                mBuffer.order(ByteOrder.nativeOrder());
            }

            @Override
            public ByteBuffer getBuffer() {
                throwISEIfImageIsInvalid();
                return mBuffer;
            }
            final private int mPixelStride;
            final private int mRowStride;

            private ByteBuffer mBuffer;
}
  • Many developers relied on camera1's Camera.PreviewCallback void onPreviewFrame(byte[] data, Camera camera) for real-time frames. camera2 drops that interface (the camera1 APIs still function, though deprecated); its replacement is ImageReader.OnImageAvailableListener->onImageAvailable.
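One practical detail of acquireNextImage is the maxImages budget set in ImageReader.newInstance: the app may hold at most that many acquired images at once, and forgetting to close() them stalls the pipeline. The sketch below is a toy model of that accounting only, not the real android.media.ImageReader:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model of ImageReader's maxImages budget: acquireNextImage fails once
// the app holds maxImages unclosed images; close() frees a slot.
public class ImageReaderModel {
    private final int maxImages;
    private int acquired = 0;
    private final Deque<Integer> available = new ArrayDeque<>();

    ImageReaderModel(int maxImages) { this.maxImages = maxImages; }

    void onNewFrame(int frame) { available.add(frame); }

    // Like acquireNextImage(): null when nothing is pending, exception when
    // the app already holds maxImages unclosed images.
    Integer acquireNextImage() {
        if (available.isEmpty()) return null;
        if (acquired >= maxImages)
            throw new IllegalStateException("maxImages (" + maxImages + ") already acquired");
        acquired++;
        return available.poll();
    }

    void close(Integer image) { acquired--; } // frees a slot for the next acquire

    public static void main(String[] args) {
        ImageReaderModel reader = new ImageReaderModel(2);
        reader.onNewFrame(0); reader.onNewFrame(1); reader.onNewFrame(2);
        Integer a = reader.acquireNextImage();
        Integer b = reader.acquireNextImage();
        reader.close(a);                       // must close before acquiring more
        Integer c = reader.acquireNextImage();
        System.out.println(a + " " + b + " " + c); // 0 1 2
    }
}
```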

3.5 Other notes

