[iOS Development] AVCaptureSession: Taking Photos, Recording Video, and Grabbing Frames (A Summary)

1 Creating the session

2 Adding an input

3 Adding an output

4 Starting capture

5 Showing the user the current recording status

6 Capturing

7 Stopping capture

8 References

 

1 Creating the session

1.1 Declaring the session

AVCaptureSession *session = [[AVCaptureSession alloc] init];

 

// Add inputs and outputs.

 

[session startRunning];

 

1.2 Setting the capture quality

Symbol                          Resolution   Comments
AVCaptureSessionPresetHigh      High         Highest recording quality; varies per device.
AVCaptureSessionPresetMedium    Medium       Suitable for WiFi sharing; the actual values may change.
AVCaptureSessionPresetLow       Low          Suitable for 3G sharing; the actual values may change.
AVCaptureSessionPreset640x480   640x480      VGA.
AVCaptureSessionPreset1280x720  1280x720     720p HD.
AVCaptureSessionPresetPhoto     Photo        Full photo resolution; not supported for video output.

 

if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}

 

1.3 Reconfiguring the session

[session beginConfiguration];

// Remove an existing capture device.
// Add a new capture device.
// Reset the preset.

[session commitConfiguration];
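As a minimal sketch of what such a reconfiguration might look like (the oldInput and newInput variables and the Medium preset here are assumptions for illustration, not part of the original post):

[session beginConfiguration];

// Remove the input that is currently attached (assumed to be held in oldInput).
[session removeInput:oldInput];

// Add a different input (assumed to be held in newInput), if the session accepts it.
if ([session canAddInput:newInput]) {
    [session addInput:newInput];
}

// Lower the preset, checking support first.
if ([session canSetSessionPreset:AVCaptureSessionPresetMedium]) {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}

// All of the above is applied atomically here.
[session commitConfiguration];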

 

 

2 Adding an input

2.1 Configuring a device (finding the front and back cameras)

NSArray *devices = [AVCaptureDevice devices];

for (AVCaptureDevice *device in devices) {

    NSLog(@"Device name: %@", [device localizedName]);

    if ([device hasMediaType:AVMediaTypeVideo]) {

        if ([device position] == AVCaptureDevicePositionBack) {
            NSLog(@"Device position : back");
        }
        else {
            NSLog(@"Device position : front");
        }
    }
}
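Rather than only logging, you usually want to pick a specific camera. A small helper along these lines (my own sketch, not from the original post) returns the first video device at the requested position:

// Sketch: return the first video-capable device at the given position, or nil.
- (AVCaptureDevice *)cameraWithPosition:(AVCaptureDevicePosition)position
{
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if ([device position] == position) {
            return device;
        }
    }
    return nil;
}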

 

2.2 Switching Between Devices (front/back)

AVCaptureSession *session = <#A capture session#>;

[session beginConfiguration];

 

[session removeInput:frontFacingCameraDeviceInput];

[session addInput:backFacingCameraDeviceInput];

 

[session commitConfiguration];
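For completeness, here is one way the two device inputs used above might be created beforehand; cameraWithPosition: is the sketch from section 2.1, and everything in this snippet is illustrative rather than from the original:

NSError *error = nil;

AVCaptureDevice *frontCamera = [self cameraWithPosition:AVCaptureDevicePositionFront];
AVCaptureDevice *backCamera  = [self cameraWithPosition:AVCaptureDevicePositionBack];

// Wrap each device in a device input so it can be attached to the session.
AVCaptureDeviceInput *frontFacingCameraDeviceInput =
        [AVCaptureDeviceInput deviceInputWithDevice:frontCamera error:&error];
AVCaptureDeviceInput *backFacingCameraDeviceInput =
        [AVCaptureDeviceInput deviceInputWithDevice:backCamera error:&error];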

 

2.3 Adding the input device to the current session

NSError *error;

AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

if (!input) {
    // Handle the error appropriately.
}

AVCaptureSession *captureSession = <#Get a capture session#>;

AVCaptureDeviceInput *captureDeviceInput = <#Get a capture device input#>;

if ([captureSession canAddInput:captureDeviceInput]) {
    [captureSession addInput:captureDeviceInput];
}
else {
    // Handle the failure.
}

 

3 Adding an output to the session

AVCaptureMovieFileOutput to output to a movie file (write the recording to a movie file)

AVCaptureVideoDataOutput if you want to process frames from the video being captured (grab frame data from the video stream)

AVCaptureAudioDataOutput if you want to process the audio data being captured (capture audio)

AVCaptureStillImageOutput if you want to capture still images with accompanying metadata (capture still images)

 

3.1 Adding an output to the session

 

AVCaptureSession *captureSession = <#Get a capture session#>;

AVCaptureMovieFileOutput *movieOutput = <#Create and configure a movie output#>;

if ([captureSession canAddOutput:movieOutput]) {
    [captureSession addOutput:movieOutput];
}
else {
    // Handle the failure.
}

 

 

3.2 Saving to a Movie File

 

3.2.1 Declaring the output

AVCaptureMovieFileOutput *aMovieFileOutput = [[AVCaptureMovieFileOutput alloc] init];

CMTime maxDuration = <#Create a CMTime to represent the maximum duration#>;
aMovieFileOutput.maxRecordedDuration = maxDuration;

aMovieFileOutput.minFreeDiskSpaceLimit = <#An appropriate minimum given the quality of the movie format and the duration#>;
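As a concrete illustration of the two placeholders above (the 60-second cap and 1 MB floor are arbitrary values chosen for this sketch, not recommendations from the original):

// Cap recordings at 60 seconds; a timescale of 600 is a common choice for video.
aMovieFileOutput.maxRecordedDuration = CMTimeMakeWithSeconds(60, 600);

// Stop recording if free disk space drops below roughly 1 MB.
aMovieFileOutput.minFreeDiskSpaceLimit = 1024 * 1024;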

 

 

3.2.2 Configuring recording to a specific file

AVCaptureMovieFileOutput *aMovieFileOutput = <#Get a movie file output#>;

 

NSURL *fileURL = <#A file URL that identifies the output location#>;

 

[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:<#The delegate#>];
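One way to fill in the file URL is to write into the app's temporary directory; the file name below is an arbitrary example of mine, and self is assumed to adopt AVCaptureFileOutputRecordingDelegate:

// Build a URL for a movie file in the temporary directory; the name is illustrative.
NSString *outputPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"capture.mov"];
NSURL *fileURL = [NSURL fileURLWithPath:outputPath];

// Recording fails if a file already exists at the URL, so remove any leftover file first.
[[NSFileManager defaultManager] removeItemAtPath:outputPath error:NULL];

[aMovieFileOutput startRecordingToOutputFileURL:fileURL recordingDelegate:self];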

 

 

3.2.3 Determining whether the file was written successfully

Implement captureOutput:didFinishRecordingToOutputFileAtURL:fromConnections:error: on the recording delegate:

 

- (void)captureOutput:(AVCaptureFileOutput *)captureOutput
        didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL
        fromConnections:(NSArray *)connections
        error:(NSError *)error {

    BOOL recordedSuccessfully = YES;

    if ([error code] != noErr) {

        // A problem occurred: Find out if the recording was successful.
        id value = [[error userInfo] objectForKey:AVErrorRecordingSuccessfullyFinishedKey];

        if (value) {
            recordedSuccessfully = [value boolValue];
        }
    }

    // Continue as appropriate...
}

 

 

 

3.3 Grabbing frames from the capture

 

3.3.1 Setting the pixel format of captured frames

To be honest, I only half understand the pixel format material below; my impression is that the choice of pixel format has some effect on image quality.

You can use the videoSettings property to specify a custom output format. The video settings property is a dictionary; currently, the only supported key is kCVPixelBufferPixelFormatTypeKey. The recommended pixel format choices for iPhone 4 are kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange or kCVPixelFormatType_32BGRA; for iPhone 3G the recommended pixel format choices are kCVPixelFormatType_422YpCbCr8 or kCVPixelFormatType_32BGRA. Both Core Graphics and OpenGL work well with the BGRA format:

 

// Create a VideoDataOutput and add it to the session
AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
[session addOutput:output];

// Configure your output.
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[output setSampleBufferDelegate:self queue:queue];
dispatch_release(queue);

// Specify the pixel format
output.videoSettings =
    [NSDictionary dictionaryWithObject:
        [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];

 

3.3.2 Capturing still images

AVCaptureStillImageOutput is the class used to capture still images.

 

 

Preset      iPhone 3G    iPhone 3GS   iPhone 4 (Back)   iPhone 4 (Front)
High        400x304      640x480      1280x720          640x480
Medium      400x304      480x360      480x360           480x360
Low         400x304      192x144      192x144           192x144
640x480     N/A          640x480      640x480           640x480
1280x720    N/A          N/A          1280x720          N/A
Photo       1600x1200    2048x1536    2592x1936         640x480

Pixel and Encoding Formats

Different devices support different image formats:

iPhone 3G                  iPhone 3GS                 iPhone 4
yuvs, 2vuy, BGRA, jpeg     420f, 420v, BGRA, jpeg     420f, 420v, BGRA, jpeg

 

 

You can specify the format you want to capture yourself; the following configures the output to capture a JPEG image:

 

AVCaptureStillImageOutput *stillImageOutput = [[AVCaptureStillImageOutput alloc] init];

 

NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecJPEG};

 

[stillImageOutput setOutputSettings:outputSettings];

 

If you use the JPEG image format, you should not specify any additional compression settings; the output compresses the image automatically, and that compression is hardware accelerated. When you need to use the image data, you can call jpegStillImageNSDataRepresentation: to get the corresponding NSData; this method does not re-compress the data.

 

jpegStillImageNSDataRepresentation:

Returns an NSData representation of the still image data and metadata attachments in a JPEG sample buffer.

+ (NSData *)jpegStillImageNSDataRepresentation:(CMSampleBufferRef)jpegSampleBuffer

Parameters

jpegSampleBuffer

The sample buffer carrying JPEG image data, optionally with Exif metadata sample buffer attachments.

This method throws an NSInvalidArgumentException if jpegSampleBuffer is NULL or not in the JPEG format.

Return Value

An NSData representation of jpegSampleBuffer.

Discussion

This method merges the image data and Exif metadata sample buffer attachments without re-compressing the image.

The returned NSData object is suitable for writing to disk.

 

 

Capturing an Image

When you want to capture an image, you send the output a captureStillImageAsynchronouslyFromConnection:completionHandler: message. The first argument is the connection you want to use for the capture. You need to look for the connection whose input port is collecting video:

AVCaptureConnection *videoConnection = nil;

for (AVCaptureConnection *connection in stillImageOutput.connections) {

    for (AVCaptureInputPort *port in [connection inputPorts]) {

        if ([[port mediaType] isEqual:AVMediaTypeVideo]) {
            videoConnection = connection;
            break;
        }
    }

    if (videoConnection) { break; }
}

 

The second argument to captureStillImageAsynchronouslyFromConnection:completionHandler: is a block that takes two arguments: a CMSampleBuffer containing the image data, and an error. The sample buffer itself may contain metadata, such as an Exif dictionary, as an attachment. You can modify the attachments should you want, but note the optimization for JPEG images discussed in “Pixel and Encoding Formats.”

[stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:
    ^(CMSampleBufferRef imageSampleBuffer, NSError *error) {

        CFDictionaryRef exifAttachments =
            CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);

        if (exifAttachments) {
            // Do something with the attachments.
        }

        // Continue as appropriate.
    }];
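Tying this back to section 3.3.2: inside that completion handler you would typically turn the sample buffer into JPEG data with jpegStillImageNSDataRepresentation:. A minimal sketch, assuming the output was configured for AVVideoCodecJPEG as shown earlier:

// Convert the JPEG sample buffer to NSData without re-compressing it.
NSData *jpegData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];

// The data can then be turned into a UIImage or written straight to disk.
UIImage *stillImage = [UIImage imageWithData:jpegData];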

 

 

 

5 Showing the user the current recording status

5.1 Recording preview

 

AVCaptureSession *captureSession = <#Get a capture session#>;

CALayer *viewLayer = <#Get a layer from the view in which you want to present the preview#>;

AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:captureSession];

[viewLayer addSublayer:captureVideoPreviewLayer];

 

Video Gravity Modes

The preview layer supports three gravity modes that you set using videoGravity:

AVLayerVideoGravityResizeAspect: preserves the aspect ratio, leaving black bars where the video does not fill the available screen area.

AVLayerVideoGravityResizeAspectFill: preserves the aspect ratio, but fills the available screen area, cropping the video when necessary.

AVLayerVideoGravityResize: simply stretches the video to fill the available screen area, even if doing so distorts the image.
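A short sketch of how the preview layer from 5.1 might be sized and given a gravity mode; the aspect-fill choice here is just an example (the demo at the end of this post does the same thing):

// Fill the hosting view, cropping the video rather than letterboxing it.
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.frame = viewLayer.bounds;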

 

 

6 Capturing

Below is a complete walkthrough of the process.

 

Putting it all Together: Capturing Video Frames as UIImage Objects

This brief code example illustrates how you can capture video and convert the frames you get into UIImage objects.

Note: To focus on the most relevant code, this example omits several aspects of a complete application, including memory management. To use AV Foundation, you are expected to have enough experience with Cocoa to be able to infer the missing pieces.

 

Create and Configure a Capture Session

You use an AVCaptureSession object to coordinate the flow of data from an AV input device to an output. Create a session, and configure it to produce medium resolution video frames.

AVCaptureSession *session = [[AVCaptureSession alloc] init];

 

session.sessionPreset = AVCaptureSessionPresetMedium;

 

Create and Configure the Device and Device Input

Capture devices are represented by AVCaptureDevice objects; the class provides methods to retrieve an object for the input type you want. A device has one or more ports, configured using an AVCaptureInput object. Typically, you use the capture input in its default configuration.

Find a video capture device, then create a device input with the device and add it to the session.

AVCaptureDevice *device =
        [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

NSError *error = nil;

AVCaptureDeviceInput *input =
        [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];

if (!input) {
    // Handle the error appropriately.
}

[session addInput:input];

 

Create and Configure the Data Output

You use an AVCaptureVideoDataOutput object to process uncompressed frames from the video being captured. You typically configure several aspects of an output. For video, for example, you can specify the pixel format using the videoSettings property, and cap the frame rate by setting the minFrameDuration property.

Create and configure an output for video data and add it to the session; cap the frame rate to 15 fps by setting the minFrameDuration property to 1/15 second:

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];

[session addOutput:output];

output.videoSettings =
                @{ (NSString *)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

output.minFrameDuration = CMTimeMake(1, 15);

 

The data output object uses delegation to vend the video frames. The delegate must adopt the AVCaptureVideoDataOutputSampleBufferDelegate protocol. When you set the data output’s delegate, you must also provide a queue on which callbacks should be invoked.

dispatch_queue_t queue = dispatch_queue_create("MyQueue", NULL);

 

[output setSampleBufferDelegate:self queue:queue];

 

dispatch_release(queue);

 

You use the queue to modify the priority given to delivering and processing the video frames.

Implement the Sample Buffer Delegate Method

In the delegate class, implement the method (captureOutput:didOutputSampleBuffer:fromConnection:) that is called when a sample buffer is written. The video data output object delivers frames as CMSampleBuffers, so you need to convert from the CMSampleBuffer to a UIImage object. The function for this operation is shown in “Converting a CMSampleBuffer to a UIImage.”

- (void)captureOutput:(AVCaptureOutput *)captureOutput
         didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
         fromConnection:(AVCaptureConnection *)connection {

    UIImage *image = imageFromSampleBuffer(sampleBuffer);

    // Add your code here that uses the image.
}

 

Remember that the delegate method is invoked on the queue you specified in setSampleBufferDelegate:queue:; if you want to update the user interface, you must invoke any relevant code on the main thread.
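A minimal sketch of hopping back to the main thread from inside that delegate method; the imageView property is an assumption for illustration:

dispatch_async(dispatch_get_main_queue(), ^{
    // UIKit must only be touched on the main thread.
    self.imageView.image = image;
});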

Starting and Stopping Recording

After configuring the capture session, you send it a startRunning message to start the recording.

[session startRunning];

 

To stop recording, you send the session a stopRunning message.

 

 

DEMO Code

One problem came up while running this demo: in -captureOutput:didOutputSampleBuffer:fromConnection:, after grabbing a frame and converting it to a UIImage, the image would never display when the UIImage itself was passed out of the callback. After some digging, converting it to NSData first and passing that out made everything work, so this is worth pointing out.

 

// Create and configure a capture session and start it running
- (void)setupCaptureSession
{
    NSError *error = nil;

    // Create the session
    AVCaptureSession *session = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    session.sessionPreset = AVCaptureSessionPresetLow;

    // Find a suitable AVCaptureDevice
    AVCaptureDevice *device = [AVCaptureDevice
                               defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device
                                                                        error:&error];
    if (!input) {
        // Handle the error appropriately.
    }
    [session addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[[AVCaptureVideoDataOutput alloc] init] autorelease];
    [session addOutput:output];

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings =
    [NSDictionary dictionaryWithObject:
     [NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];

    // Add the preview layer to the interface
    AVCaptureVideoPreviewLayer *previewLayer = nil;
    previewLayer = [[[AVCaptureVideoPreviewLayer alloc] initWithSession:session] autorelease];
    [previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];
    CGRect layerRect = [[[self view] layer] bounds];
    [previewLayer setBounds:layerRect];
    [previewLayer setPosition:CGPointMake(CGRectGetMidX(layerRect), CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:previewLayer];

    // If you wish to cap the frame rate to a known value, such as 15 fps, set
    // minFrameDuration.
//    output.minFrameDuration = CMTimeMake(1, 15);

    // Start the session running to start the flow of data
    [session startRunning];

    sessionGlobal = session;
    // Assign session to an ivar.
   //  [self setSession:session];
    isCapture = FALSE;

    UIView *v = [[UIView alloc] initWithFrame:CGRectMake(0, 0, 300, 300)];
    v.backgroundColor = [UIColor blueColor];
    v.layer.masksToBounds = YES;
    v1 = [v retain];

    [self.view addSubview:v];
   // [v release];

    start = [[NSDate date] timeIntervalSince1970];
    before = start;
    num = 0;
}

 

- (NSTimeInterval)getTimeFromStart
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];

    NSTimeInterval b = now - start;
    return b;
}

 

- (void)showImage:(NSData *)topImageData
{
    // Stop the session after a handful of frames have been captured.
    if (num > 5)
    {
        [sessionGlobal stopRunning];
        return;
    }
    num++;

    // Write the JPEG data to a numbered file in the home directory.
    NSString *numStr = [NSString stringWithFormat:@"%d.jpg", num];
    NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:numStr];
    NSLog(@"PATH : %@", path);
    [topImageData writeToFile:path atomically:YES];

    // Show the captured frame in a new image view.
    UIImageView *imageView = [[UIImageView alloc] initWithFrame:CGRectMake(0, 0, 100, 100)];
    imageView.layer.masksToBounds = YES;
    imageView.backgroundColor = [UIColor redColor];
    UIImage *img = [[UIImage alloc] initWithData:topImageData];
    imageView.image = img;
    [img release];
    [self.view addSubview:imageView];
    [imageView release];
    [self.view setNeedsDisplay];
//    [v1 setNeedsDisplay];
}

 

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSDate *dat = [NSDate dateWithTimeIntervalSinceNow:0];
    NSTimeInterval now = [dat timeIntervalSince1970];
    NSLog(@" before: %f  num: %f", before, now - before);

    // Only grab a frame every 5 seconds.
    if ((now - before) > 5)
    {
        before = [[NSDate date] timeIntervalSince1970];

        // Create a UIImage from the sample buffer data
        UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
        if (image != nil)
        {
//            NSTimeInterval t = [self getTimeFromStart];
            // Hand the frame to the main thread as NSData (see the note above about UIImage not displaying).
            NSData *topImageData = UIImageJPEGRepresentation(image, 1.0);
            [self performSelectorOnMainThread:@selector(showImage:) withObject:topImageData waitUntilDone:NO];
        }
    }
}

 

// Create a UIImage from sample buffer data
- (UIImage *)imageFromSampleBuffer:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    if (!colorSpace)
    {
        NSLog(@"CGColorSpaceCreateDeviceRGB failure");
        // Unlock the buffer before bailing out.
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
        return nil;
    }

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
    // Get the data size for contiguous planes of the pixel buffer.
    size_t bufferSize = CVPixelBufferGetDataSize(imageBuffer);

    // Create a Quartz direct-access data provider that uses data we supply
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, baseAddress, bufferSize, NULL);
    // Create a bitmap image from data supplied by our data provider
    CGImageRef cgImage =
    CGImageCreate(width,
                  height,
                  8,
                  32,
                  bytesPerRow,
                  colorSpace,
                  kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little,
                  provider,
                  NULL,
                  true,
                  kCGRenderingIntentDefault);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpace);

    // Create and return an image object representing the specified Quartz image
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    return image;
}

 

 

7 Stopping capture

- (void)stopVideoCapture:(id)arg
{
    // Stop the camera capture session
    if (self->avCaptureSession) {
        [self->avCaptureSession stopRunning];
        self->avCaptureSession = nil;
        [labelState setText:@"Video capture stopped"];
    }
}

8 References

The Media Capture chapter of the documentation bundled with Xcode.

A fairly complete capture example: http://chenweihuacwh.iteye.com/blog/734229

Two more posts, found more or less at random:

http://blog.csdn.net/guo_hongjun1611/article/details/7992294

http://blog.csdn.net/xiaobingbing/article/details/6012798

