This article is reposted from: AVAudioFoundation(3): Audio/Video Editing | www.samirchen.com
The content of this article is largely based on the AVFoundation Programming Guide.
Audio and Video Editing
Having taken a quick look at the AVFoundation framework, let's now turn to the interfaces related to audio and video editing.
A composition can be thought of simply as a collection of tracks, and those tracks may come from different assets. AVMutableComposition provides interfaces for inserting and removing tracks, and for adjusting their ordering.
The figure below shows how a new composition pulls the corresponding tracks out of existing assets and stitches them together into a new asset.
When working with audio, you can use the AVMutableAudioMix class to perform custom processing, as shown in the figure below. Currently, you can specify a maximum volume or set a volume ramp for an audio track.
As shown in the next figure, you can also use AVMutableVideoComposition to work directly on the video tracks in a composition. When processing a single video composition, you can specify its render size, scale, frame rate, and other parameters, and produce the final video output. Through video composition instructions (AVMutableVideoCompositionInstruction and related classes), you can change the video's background color and apply layer instructions. The layer instructions (AVMutableVideoCompositionLayerInstruction and related classes) can apply geometric transforms and transform ramps, as well as opacity and opacity ramps, to the video tracks in the composition. You can also bring in animation effects from the Core Animation framework by setting the video composition's animationTool property.
As shown below, you can use the AVAssetExportSession interfaces to combine your composition with an audio mix and a video composition. Initialize an AVAssetExportSession object, then set its audioMix and videoComposition properties to your audio mix and video composition, respectively.
Creating a Composition
The sections above briefly introduced several audio/video editing scenarios; now let's look at the concrete interfaces in detail, starting with AVMutableComposition.
When creating your own composition with AVMutableComposition, the most typical workflow is to use AVMutableCompositionTrack to add one or more composition tracks to the composition. For example, the simple snippet below adds one audio track and one video track to a composition:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
// Create the video composition track.
AVMutableCompositionTrack *mutableCompositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
// Create the audio composition track.
AVMutableCompositionTrack *mutableCompositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
When adding a new track to a composition, you must provide its media type and a track ID. The main media types include audio, video, subtitle, and text.
Note that every track requires a unique track ID. A convenient approach is to pass kCMPersistentTrackID_Invalid as the track ID, which makes the framework automatically generate a unique ID for the track.
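Other media types are added the same way. For instance, a subtitle track (purely illustrative, not part of the original guide) might look like:

```objc
// Add a subtitle track to the same composition; any AVMediaType constant works here.
AVMutableCompositionTrack *subtitleCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeSubtitle preferredTrackID:kCMPersistentTrackID_Invalid];
```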
Adding Audiovisual Data to a Composition
To add media data to a composition track, you need access to the AVAsset that contains the media data. Multiple tracks of the same media type can be added to the same composition track using the AVMutableCompositionTrack interface. The example below takes one video asset track from each of two AVAssets and appends both to a new composition track:
// You can retrieve AVAssets from a number of places, like the camera roll for example.
AVAsset *videoAsset = <#AVAsset with at least one video track#>;
AVAsset *anotherVideoAsset = <#another AVAsset with at least one video track#>;
// Get the first video track from each asset.
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *anotherVideoAssetTrack = [[anotherVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Add them both to the composition.
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAssetTrack.timeRange.duration) ofTrack:videoAssetTrack atTime:kCMTimeZero error:nil];
[mutableCompositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, anotherVideoAssetTrack.timeRange.duration) ofTrack:anotherVideoAssetTrack atTime:videoAssetTrack.timeRange.duration error:nil];
Retrieving Compatible Composition Tracks
Where possible, use only one composition track per media type; this optimizes resource usage. When playing media data serially, place media data of the same type on the same composition track. You can query a composition for an existing composition track that is compatible with a given asset track, and reuse it if one is found:
AVMutableCompositionTrack *compatibleCompositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:<#the AVAssetTrack you want to insert#>];
if (compatibleCompositionTrack) {
// Implementation continues.
}
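One way the lookup might continue is to reuse the compatible track if one exists, and otherwise fall back to creating a new one. The following is a sketch; the fallback-and-insert logic is an assumption for illustration, not part of the original guide:

```objc
AVAssetTrack *assetTrack = <#the AVAssetTrack you want to insert#>;
AVMutableCompositionTrack *compositionTrack = [mutableComposition mutableTrackCompatibleWithTrack:assetTrack];
if (!compositionTrack) {
    // No compatible track found; create a new composition track of the same media type.
    compositionTrack = [mutableComposition addMutableTrackWithMediaType:assetTrack.mediaType preferredTrackID:kCMPersistentTrackID_Invalid];
}
// Append the asset track at the end of the (possibly reused) composition track.
[compositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration)
                          ofTrack:assetTrack
                           atTime:compositionTrack.timeRange.duration
                            error:nil];
```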
Note that placing multiple video segments on the same composition track can drop frames at the transitions between segments, especially on embedded devices. Choose the number of video segments per composition track accordingly.
Setting Volume Ramps
A single AVMutableAudioMix object can perform custom audio processing on each audio track of a composition individually.
The code below shows how to use AVMutableAudioMix to set a volume ramp on an audio track so that the sound gradually fades out. Get an AVMutableAudioMix instance with the audioMix class method; then use the audioMixInputParametersWithTrack: method of the AVMutableAudioMixInputParameters class to associate the audio mix with a particular audio track in the composition; after that, you can adjust the volume through the audio mix instance.
AVMutableAudioMix *mutableAudioMix = [AVMutableAudioMix audioMix];
// Create the audio mix input parameters object.
AVMutableAudioMixInputParameters *mixParameters = [AVMutableAudioMixInputParameters audioMixInputParametersWithTrack:mutableCompositionAudioTrack];
// Set the volume ramp to slowly fade the audio out over the duration of the composition.
[mixParameters setVolumeRampFromStartVolume:1.f toEndVolume:0.f timeRange:CMTimeRangeMake(kCMTimeZero, mutableComposition.duration)];
// Attach the input parameters to the audio mix.
mutableAudioMix.inputParameters = @[mixParameters];
Performing Custom Video Processing
Just as we use AVMutableAudioMix for audio processing, we use AVMutableVideoComposition for video processing. A single AVMutableVideoComposition instance can perform custom processing on all the video tracks in a composition, such as setting the render size, scale, and frame rate.
Let's walk through a few scenarios.
Changing the Video's Background Color
Every video composition must also contain at least one AVVideoCompositionInstruction object. We can use AVMutableVideoCompositionInstruction to create our own video composition instructions; through these instructions, we can modify the composition's background color, specify post-processing, apply layer instructions, and so on.
The example below shows how to create a video composition instruction that sets a red background color for the entire duration of a composition:
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, mutableComposition.duration);
mutableVideoCompositionInstruction.backgroundColor = [[UIColor redColor] CGColor];
Setting Opacity Ramps
Video composition instructions can also be used to apply video composition layer instructions. An AVMutableVideoCompositionLayerInstruction can apply transforms and transform ramps, as well as opacity and opacity ramps, to a video track. The order of the layer instructions stored in a video composition instruction's layerInstructions array determines how the video frames from the tracks are layered and composed.
The sample code below shows how to add an opacity ramp so that the first video fades out at the transition to the second video:
AVAssetTrack *firstVideoAssetTrack = <#AVAssetTrack representing the first video segment played in the composition#>;
AVAssetTrack *secondVideoAssetTrack = <#AVAssetTrack representing the second video segment played in the composition#>;
// Create the first video composition instruction.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
// Create the layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Create the opacity ramp to fade out the first video track over its entire duration.
[firstVideoLayerInstruction setOpacityRampFromStartOpacity:1.f toEndOpacity:0.f timeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration)];
// Create the second video composition instruction so that the second video track isn't transparent.
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set its time range to span the duration of the second video track.
// Note: CMTimeRangeMake takes a start time and a duration, so the second argument is the second track's duration, not the combined end time.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create the second layer instruction and associate it with the composition video track.
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack];
// Attach the first layer instruction to the first video composition instruction.
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
// Attach the second layer instruction to the second video composition instruction.
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
// Attach both of the video composition instructions to the video composition.
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
Adding Animation Effects
You can also harness the power of the Core Animation framework by setting the animationTool property of the video composition, for example to add watermarks, titles, or animated overlays to a video.
There are two different ways to use Core Animation with a video composition:
- Add a Core Animation layer as its own individual composition track
- Render animation effects from a Core Animation layer directly into the video frames
The code below demonstrates the latter approach, adding a watermark at the center of the video area:
CALayer *watermarkLayer = <#CALayer representing your desired watermark image#>;
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, mutableVideoComposition.renderSize.width, mutableVideoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
watermarkLayer.position = CGPointMake(mutableVideoComposition.renderSize.width/2, mutableVideoComposition.renderSize.height/4);
[parentLayer addSublayer:watermarkLayer];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
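For comparison, the first approach — rendering the Core Animation layer as its own track — would use videoCompositionCoreAnimationToolWithAdditionalLayer:asTrackID: instead. A sketch; the layer content and the chosen track ID are assumptions:

```objc
// Treat an animated CALayer as an additional, synthesized video track.
CALayer *animatedOverlayLayer = <#CALayer with your Core Animation content#>;
// The track ID must not collide with any existing track ID in the composition.
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:animatedOverlayLayer asTrackID:100];
```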
A Complete Example
This example shows how to combine two video asset tracks and one audio asset track into a single video file. The broad steps are:
- Create an AVMutableComposition object and add multiple AVMutableCompositionTrack objects to it
- Insert the time ranges of the AVAssetTrack objects into the composition tracks
- Check the preferredTransform property of the video asset tracks to determine the video orientation
- Apply transforms to the video tracks using AVMutableVideoCompositionLayerInstruction objects
- Set the renderSize and frameDuration properties of the video composition
- Export the composition to a video file
- Save the video file to the camera roll
The sample code below omits some memory management and notification removal code.
// 1. Create the composition: create a composition and add one audio track and one video track.
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
// 2. Add the assets: get two video tracks and one audio track from the source assets, append the two video tracks to the video composition track created above, and add the audio track to the audio composition track.
AVAsset *firstVideoAsset = <#First AVAsset with at least one video track#>;
AVAsset *secondVideoAsset = <#Second AVAsset with at least one video track#>;
AVAsset *audioAsset = <#AVAsset with at least one audio track#>;
AVAssetTrack *firstVideoAssetTrack = [[firstVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *secondVideoAssetTrack = [[secondVideoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioAssetTrack = [[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration) ofTrack:firstVideoAssetTrack atTime:kCMTimeZero error:nil];
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondVideoAssetTrack.timeRange.duration) ofTrack:secondVideoAssetTrack atTime:firstVideoAssetTrack.timeRange.duration error:nil];
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration)) ofTrack:audioAssetTrack atTime:kCMTimeZero error:nil];
// 3. Check the composition's orientation: once the audio and video tracks are in the composition, you must make sure the orientations of all its video tracks agree. By default a video track is assumed to be in landscape mode; if a track added to the composition was recorded in portrait mode, the exported video will be oriented incorrectly. Likewise, trying to combine a landscape video with a portrait video causes the export session to fail.
BOOL isFirstVideoAssetPortrait = NO;
CGAffineTransform firstTransform = firstVideoAssetTrack.preferredTransform;
// Check the first video track's preferred transform to determine if it was recorded in portrait mode.
if (firstTransform.a == 0 && firstTransform.d == 0 && (firstTransform.b == 1.0 || firstTransform.b == -1.0) && (firstTransform.c == 1.0 || firstTransform.c == -1.0)) {
isFirstVideoAssetPortrait = YES;
}
BOOL isSecondVideoAssetPortrait = NO;
CGAffineTransform secondTransform = secondVideoAssetTrack.preferredTransform;
// Check the second video track's preferred transform to determine if it was recorded in portrait mode.
if (secondTransform.a == 0 && secondTransform.d == 0 && (secondTransform.b == 1.0 || secondTransform.b == -1.0) && (secondTransform.c == 1.0 || secondTransform.c == -1.0)) {
isSecondVideoAssetPortrait = YES;
}
if ((isFirstVideoAssetPortrait && !isSecondVideoAssetPortrait) || (!isFirstVideoAssetPortrait && isSecondVideoAssetPortrait)) {
UIAlertView *incompatibleVideoOrientationAlert = [[UIAlertView alloc] initWithTitle:@"Error!" message:@"Cannot combine a video shot in portrait mode with a video shot in landscape mode." delegate:self cancelButtonTitle:@"Dismiss" otherButtonTitles:nil];
[incompatibleVideoOrientationAlert show];
return;
}
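The portrait test above is written out twice; if you prefer, it can be factored into a small helper (purely illustrative, not from the original guide):

```objc
// Returns YES when a track's preferred transform indicates a 90-degree rotation,
// i.e. the video was recorded in portrait orientation.
static BOOL IsAssetTrackPortrait(AVAssetTrack *track) {
    CGAffineTransform t = track.preferredTransform;
    return (t.a == 0 && t.d == 0 &&
            (t.b == 1.0 || t.b == -1.0) &&
            (t.c == 1.0 || t.c == -1.0));
}
```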
// 4. Apply the video composition layer instructions: once you know the orientations of the video segments to be merged are compatible, you can apply the necessary layer instructions to each segment and add those layer instructions to the video composition.
// All `AVAssetTrack` objects have a `preferredTransform` property containing the orientation information for the asset track. This transform is applied whenever the asset track is displayed on screen. In the code below, each layer instruction's transform is set to the corresponding asset track's transform, so that the video in the new composition displays correctly once you adjust its render size.
AVMutableVideoCompositionInstruction *firstVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the first instruction to span the duration of the first video track.
firstVideoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, firstVideoAssetTrack.timeRange.duration);
AVMutableVideoCompositionInstruction *secondVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
// Set the time range of the second instruction to span the duration of the second video track.
// Note: CMTimeRangeMake takes a start time and a duration, so the second argument is the second track's duration, not the combined end time.
secondVideoCompositionInstruction.timeRange = CMTimeRangeMake(firstVideoAssetTrack.timeRange.duration, secondVideoAssetTrack.timeRange.duration);
// Create two video layer instructions, associate each with the video composition track, and set their transforms to the corresponding preferredTransform.
AVMutableVideoCompositionLayerInstruction *firstVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the first layer instruction to the preferred transform of the first video track.
[firstVideoLayerInstruction setTransform:firstTransform atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *secondVideoLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
// Set the transform of the second layer instruction to the preferred transform of the second video track.
[secondVideoLayerInstruction setTransform:secondTransform atTime:firstVideoAssetTrack.timeRange.duration];
firstVideoCompositionInstruction.layerInstructions = @[firstVideoLayerInstruction];
secondVideoCompositionInstruction.layerInstructions = @[secondVideoLayerInstruction];
AVMutableVideoComposition *mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.instructions = @[firstVideoCompositionInstruction, secondVideoCompositionInstruction];
// 5. Set the render size and frame duration: to fully resolve the orientation issue, you also need to adjust the video composition's `renderSize` property and set an appropriate `frameDuration`, e.g. 1/30 for 30 frames per second. The `renderScale` property defaults to 1.0.
CGSize naturalSizeFirst, naturalSizeSecond;
// If the first video asset was shot in portrait mode, then so was the second one if we made it here.
if (isFirstVideoAssetPortrait) {
// Invert the width and height for the video tracks to ensure that they display properly.
naturalSizeFirst = CGSizeMake(firstVideoAssetTrack.naturalSize.height, firstVideoAssetTrack.naturalSize.width);
naturalSizeSecond = CGSizeMake(secondVideoAssetTrack.naturalSize.height, secondVideoAssetTrack.naturalSize.width);
} else {
// If the videos weren't shot in portrait mode, we can just use their natural sizes.
naturalSizeFirst = firstVideoAssetTrack.naturalSize;
naturalSizeSecond = secondVideoAssetTrack.naturalSize;
}
float renderWidth, renderHeight;
// Set the renderWidth and renderHeight to the max of the two videos widths and heights.
if (naturalSizeFirst.width > naturalSizeSecond.width) {
renderWidth = naturalSizeFirst.width;
} else {
renderWidth = naturalSizeSecond.width;
}
if (naturalSizeFirst.height > naturalSizeSecond.height) {
renderHeight = naturalSizeFirst.height;
} else {
renderHeight = naturalSizeSecond.height;
}
mutableVideoComposition.renderSize = CGSizeMake(renderWidth, renderHeight);
// Set the frame duration to an appropriate value (i.e. 30 frames per second for video).
mutableVideoComposition.frameDuration = CMTimeMake(1,30);
// 6. Export the composition and save it to the camera roll: create an `AVAssetExportSession` object and set its `outputURL` to export the video to a file. We can also use the `ALAssetsLibrary` API to save the exported video file to the camera roll.
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];