My app runs on the iPad in landscape mode and needs a customized camera interface. The first thing I considered was, naturally, the cameraOverlayView property of UIImagePickerController. But I ran into a problem: when the iPad rotates, the custom overlay view rotates along with it, while on the iPhone there is no such issue. I tried subclassing UIImagePickerController to override - (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation, but the method was never called. This seemed odd until I found the warning in the official documentation:
"Important: The UIImagePickerController class supports portrait mode only. This class is intended to be used as-is and does not support subclassing."
I was speechless. If the class lets you supply a custom view, why can't that view respond to rotation?
Eventually I learned that a custom camera view can also be built another way, with AVFoundation. This framework sits at a lower level and is much more flexible, so I studied it and, with some persistence, finally implemented what I needed.
Here is a brief introduction to capturing still images with AVFoundation:
First, a look at how AVCaptureSession works:
AVCaptureSession is the class that coordinates the capture devices and controls the flow of data to the outputs.
Capturing images requires at least the following objects:
1 AVCaptureDevice: an abstraction of an input device, such as a camera or a microphone
2 AVCaptureInput: configures an input device
3 AVCaptureOutput: manages the output, a movie file or still images
4 AVCaptureSession: coordinates the flow of data from the input devices to the output objects
To capture a still image from the camera, the following steps are required:
- Create an AVCaptureSession object to coordinate the flow of data from an AV input device to an output
- Find the AVCaptureDevice object for the input type you want
- Create an AVCaptureDeviceInput object for the device
- Create an AVCaptureVideoDataOutput object to produce video frames
- Implement a delegate for the AVCaptureVideoDataOutput object to process video frames
- Implement a function to convert the CMSampleBuffer received by the delegate into a UIImage object
Here is the concrete implementation of each step:
/* Create and configure the session */
AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetMedium;
/* Create and configure the input device */
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!input) {
    // Handle the error appropriately.
}
[session addInput:input];
/* Create the output object */
AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[session addOutput:output];
// Ask for BGRA pixel buffers, which are easy to draw into a bitmap context later.
output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                                   forKey:(id)kCVPixelBufferPixelFormatTypeKey];
// Cap frame delivery at 15 frames per second.
output.minFrameDuration = CMTimeMake(1, 15);
// The sample buffer delegate must be set together with a dispatch queue
// on which its callbacks will be delivered.
dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);
/* Implement the delegate method */
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    // This runs on the dispatch queue set above, not on the main thread,
    // so it needs its own autorelease pool.
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
    UIImage *image = imageFromSampleBuffer(sampleBuffer);
    // Add your code here that uses the image.
    [pool drain];
}
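The delegate above calls imageFromSampleBuffer(), which the snippets never define. Here is a sketch of that conversion, following the approach in Apple's AV Foundation documentation; it relies on the kCVPixelFormatType_32BGRA format configured on the output above:

/* Convert a CMSampleBuffer to a UIImage (sketch based on Apple's sample code) */
static UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Wrap the BGRA pixel data in a matching bitmap context and snapshot it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow,
        colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // The returned UIImage is autoreleased, which is one reason the delegate
    // method wraps its work in an autorelease pool.
    UIImage *image = [UIImage imageWithCGImage:quartzImage];
    CGImageRelease(quartzImage);
    return image;
}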
/* Start capturing */
[session startRunning];
Note that when creating the output object, the sample buffer delegate must be set together with a newly created dispatch queue, which is why a dispatch_queue_t is used. The delegate callback runs on that queue rather than on the main thread, so the delegate method must set up its own autorelease pool.
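For completeness, the session should also be stopped when the capture UI goes away. A minimal sketch under the manual reference counting used above (where exactly to put this, e.g. viewWillDisappear: or dealloc, depends on your controller):

/* Stop capturing and, if this object owns the session, release it */
[session stopRunning];
[session release];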
That covers the basic usage. Below is a demo I adapted from an example in a book. While implementing it I hit one more problem: when the screen rotates, the live preview flips the wrong way, so the orientation of the AVCaptureVideoPreviewLayer has to be updated on rotation (a sketch of the core fix follows the link). For the details, just look at the code:
https://github.com/cokecoffe/ios-demo/tree/master/CameraWithAVFoudation
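A minimal sketch of that rotation fix, assuming the view controller keeps the layer in a previewLayer ivar; the orientation property is the iOS 4/5-era API matching the rest of this post (newer SDKs expose the same setting through the layer's connection):

/* When setting up the session, e.g. in viewDidLoad: */
previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];

/* Keep the preview upright when the interface rotates. */
- (void)willAnimateRotationToInterfaceOrientation:(UIInterfaceOrientation)toOrientation
                                         duration:(NSTimeInterval)duration
{
    // The four UIInterfaceOrientation values map one-to-one onto
    // AVCaptureVideoOrientation, so a cast is sufficient here.
    if ([previewLayer isOrientationSupported]) {
        previewLayer.orientation = (AVCaptureVideoOrientation)toOrientation;
    }
    previewLayer.frame = self.view.bounds;
}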