Preface:
A recent project required taking a photo of anyone trying to brute-force the phone's passcode (by grabbing a frame from the camera). I had never done this before, so I hit a pile of pitfalls along the way. I'm sharing my experience here so the next person doesn't have to stumble through the same ones.
Straight to the code:
// Create the capture session
self.session = [[AVCaptureSession alloc] init];
// Check the camera authorization status to see whether we are allowed to capture video
AVAuthorizationStatus status = [AVCaptureDevice authorizationStatusForMediaType:AVMediaTypeVideo];
if (status == AVAuthorizationStatusAuthorized)
{
    self.setupResult = AVCamSetupResultSuccess;
}
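// A minimal sketch of handling the case where the user has not been asked for camera access yet.
// The completion handler runs asynchronously, so how the result feeds back into setupResult (and on
// which queue setup is re-run) depends on your own session code; treat this branch as an assumption.
// Also note that on iOS 10 and later the Info.plist must contain an NSCameraUsageDescription entry,
// or the app is terminated the first time it touches the camera.
else if (status == AVAuthorizationStatusNotDetermined)
{
    [AVCaptureDevice requestAccessForMediaType:AVMediaTypeVideo completionHandler:^(BOOL granted) {
        if (granted) {
            self.setupResult = AVCamSetupResultSuccess;
        }
    }];
}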
if ( self.setupResult != AVCamSetupResultSuccess )
    return;
self.backgroundRecordingID = UIBackgroundTaskInvalid;
NSError *error = nil;
// Create the input device (front camera)
AVCaptureDevice *videoDevice = [self deviceWithMediaType:AVMediaTypeVideo preferringPosition:AVCaptureDevicePositionFront];
AVCaptureDeviceInput *videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
// beginConfiguration is important: wrap every addInput/addOutput between beginConfiguration and commitConfiguration
[self.session beginConfiguration];
if ([self.session canAddInput:videoDeviceInput])
{
    [self.session addInput:videoDeviceInput];
    self.videoDeviceInput = videoDeviceInput;
}
// Add an output device to the session
dispatch_queue_t videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask for BGRA pixel buffers so they can be fed straight into a CGBitmapContext later
videoDataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
// Make self the AVCaptureVideoDataOutputSampleBufferDelegate; callbacks arrive on videoDataOutputQueue
[videoDataOutput setSampleBufferDelegate:self queue:videoDataOutputQueue];
if ([self.session canAddOutput:videoDataOutput])
{
    [self.session addOutput:videoDataOutput];
}
[self.session commitConfiguration];
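Note that nothing above actually starts the capture: frames are only delivered to the delegate once the session is running. A minimal sketch of kicking it off (the sessionQueue property here is my own assumption; any serial background queue you own will do):

// Start running on a background queue so the main thread is not blocked;
// call stopRunning on the same queue when you no longer need frames.
dispatch_async(self.sessionQueue, ^{
    if (self.setupResult == AVCamSetupResultSuccess) {
        [self.session startRunning];
    }
});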
// Look up the front camera
- (AVCaptureDevice *)deviceWithMediaType:(NSString *)mediaType preferringPosition:(AVCaptureDevicePosition)position{
    NSArray *devices = [AVCaptureDevice devicesWithMediaType:mediaType];
    AVCaptureDevice *captureDevice = devices.firstObject;
    for ( AVCaptureDevice *device in devices ) {
        if ( device.position == position ) {
            captureDevice = device;
            break;
        }
    }
    return captureDevice;
}
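devicesWithMediaType: still works, but it has been deprecated since iOS 10. If you only target newer systems, roughly the same lookup can be done with AVCaptureDeviceDiscoverySession; the sketch below is a possible alternative (frontCameraDevice is just an illustrative name), not what the original project used:

- (AVCaptureDevice *)frontCameraDevice
{
    // iOS 10+ replacement for devicesWithMediaType:; asks for the built-in
    // wide-angle camera facing the user.
    AVCaptureDeviceDiscoverySession *discovery =
        [AVCaptureDeviceDiscoverySession discoverySessionWithDeviceTypes:@[AVCaptureDeviceTypeBuiltInWideAngleCamera]
                                                                mediaType:AVMediaTypeVideo
                                                                 position:AVCaptureDevicePositionFront];
    return discovery.devices.firstObject;
}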
// Implement the delegate method; every captured frame arrives here
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
    // Only convert a frame when our capture flag says a photo should be taken
    if (takephoto.isShutScreen)
    {
        NSLog(@"didOutputSampleBuffer");
        UIImage *image = [self getImageBySampleBufferref:sampleBuffer];
        // ...use the image from here (see the sketch after this method)
    }
}
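Keep in mind that this callback fires on videoDataOutputQueue for every frame, not on the main thread. After converting a frame you will typically want to reset the flag and dispatch back to the main queue before doing anything further. A minimal sketch of what could follow the conversion inside that if (resetting isShutScreen and the saveIntruderImage: helper are my own assumptions, not part of the original project):

dispatch_async(dispatch_get_main_queue(), ^{
    // Clear the flag so only one frame is converted per capture request
    // (resetting isShutScreen here is an assumption about how the flag is meant to be used)
    takephoto.isShutScreen = NO;
    // Hand the photo off, e.g. persist it to disk (saveIntruderImage: is sketched further below)
    [self saveIntruderImage:image];
});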
// Convert the sample buffer into a UIImage. Many of the conversion snippets found online produce broken images; this is the approach that finally worked for me.
- (UIImage *)getImageBySampleBufferref:(CMSampleBufferRef)sampleBuffer
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    /* Lock the image buffer so its base address stays valid while we read it */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    /* Create a CGImageRef from the CVImageBufferRef; BGRA matches the videoSettings requested above */
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    /* Unlock the buffer only after the pixel data has been copied into the CGImage */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    /* Release the Core Graphics objects we created */
    CGContextRelease(newContext);
    CGColorSpaceRelease(colorSpace);
    /* Rotate to portrait; the camera delivers landscape-oriented buffers */
    UIImage *image = [UIImage imageWithCGImage:newImage scale:1.0 orientation:UIImageOrientationRight];
    NSLog(@"%@", image);
    /* Release the CGImageRef */
    CGImageRelease(newImage);
    return image;
}
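Since the whole point is to keep evidence of the break-in attempt, the captured image should be persisted right away. A minimal sketch that writes it into the app's Documents directory as a JPEG (the method name, file name, and quality setting are all illustrative assumptions):

- (void)saveIntruderImage:(UIImage *)image
{
    // Encode as JPEG; 0.8 is an arbitrary quality/size trade-off
    NSData *jpegData = UIImageJPEGRepresentation(image, 0.8);
    NSString *documentsDir = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES).firstObject;
    NSString *path = [documentsDir stringByAppendingPathComponent:@"intruder.jpg"];
    // Write atomically so a half-written file is never left on disk
    [jpegData writeToFile:path atomically:YES];
}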
Follow this approach and you will get the camera-snapshot effect described above. I hope it helps whoever needs it; if you have other suggestions, please leave a comment.
