In this section we look at how MediaCodec works. The relevant code is at:
http://aospxref.com/android-12.0.0_r3/xref/frameworks/av/media/libstagefright/MediaCodec.cpp
1. Creating the MediaCodec object
MediaCodec exposes two static methods for creating a MediaCodec object, CreateByType and CreateByComponentName. Let's look at each in turn.
CreateByType takes a mimetype plus a flag indicating whether an encoder is wanted. It queries MediaCodecList for a suitable codec, creates a MediaCodec object, and finally initializes that object with the componentName it found.
// static
sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err,
        pid_t pid, uid_t uid) {
    sp<AMessage> format;
    return CreateByType(looper, mime, encoder, err, pid, uid, format);
}

sp<MediaCodec> MediaCodec::CreateByType(
        const sp<ALooper> &looper, const AString &mime, bool encoder, status_t *err,
        pid_t pid, uid_t uid, sp<AMessage> format) {
    Vector<AString> matchingCodecs;

    MediaCodecList::findMatchingCodecs(
            mime.c_str(), encoder, 0, format, &matchingCodecs);

    if (err != NULL) {
        *err = NAME_NOT_FOUND;
    }
    for (size_t i = 0; i < matchingCodecs.size(); ++i) {
        sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);
        AString componentName = matchingCodecs[i];
        status_t ret = codec->init(componentName);
        if (err != NULL) {
            *err = ret;
        }
        if (ret == OK) {
            return codec;
        }
        ALOGD("Allocating component '%s' failed (%d), try next one.",
                componentName.c_str(), ret);
    }
    return NULL;
}
CreateByComponentName is essentially the same, except that the componentName is passed in directly, so the MediaCodecList lookup is skipped.
// static
sp<MediaCodec> MediaCodec::CreateByComponentName(
        const sp<ALooper> &looper, const AString &name, status_t *err,
        pid_t pid, uid_t uid) {
    sp<MediaCodec> codec = new MediaCodec(looper, pid, uid);

    const status_t ret = codec->init(name);
    if (err != NULL) {
        *err = ret;
    }
    return ret == OK ? codec : NULL; // NULL deallocates codec.
}
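As a quick usage sketch (not from the source; it assumes framework-internal headers, an AVC decoder, and that the caller owns a looper), a client such as NuPlayer would create a decoder roughly like this:

// Hedged creation sketch; the mime type and component name are assumptions.
#include <media/stagefright/MediaCodec.h>
#include <media/stagefright/foundation/ALooper.h>

using namespace android;

sp<MediaCodec> createAvcDecoder() {
    sp<ALooper> looper = new ALooper;
    looper->setName("codec_looper");
    looper->start();

    status_t err = OK;
    // Path 1: let MediaCodecList pick a matching component for the mime type.
    sp<MediaCodec> codec = MediaCodec::CreateByType(
            looper, "video/avc", false /* encoder */, &err);
    // Path 2: name the component explicitly, skipping the lookup, e.g.:
    // codec = MediaCodec::CreateByComponentName(
    //         looper, "c2.android.avc.decoder", &err);
    return codec;  // NULL on failure; err holds the status
}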
init
status_t MediaCodec::init(const AString &name) {
    // save the componentName
    mInitName = name;
    mCodecInfo.clear();

    bool secureCodec = false;
    const char *owner = "";
    // fetch the codecInfo matching the componentName from MediaCodecList
    if (!name.startsWith("android.filter.")) {
        status_t err = mGetCodecInfo(name, &mCodecInfo);
        if (err != OK) {
            mCodec = NULL;  // remove the codec.
            return err;
        }
        if (mCodecInfo == nullptr) {
            ALOGE("Getting codec info with name '%s' failed", name.c_str());
            return NAME_NOT_FOUND;
        }
        secureCodec = name.endsWith(".secure");
        Vector<AString> mediaTypes;
        mCodecInfo->getSupportedMediaTypes(&mediaTypes);
        for (size_t i = 0; i < mediaTypes.size(); ++i) {
            if (mediaTypes[i].startsWith("video/")) {
                mIsVideo = true;
                break;
            }
        }
        // fetch the owner name
        owner = mCodecInfo->getOwnerName();
    }

    // create a CodecBase object according to the owner
    mCodec = mGetCodecBase(name, owner);
    if (mCodec == NULL) {
        ALOGE("Getting codec base with name '%s' (owner='%s') failed",
                name.c_str(), owner);
        return NAME_NOT_FOUND;
    }

    // if the codecInfo says this is video, create a dedicated looper for it
    if (mIsVideo) {
        // video codec needs dedicated looper
        if (mCodecLooper == NULL) {
            mCodecLooper = new ALooper;
            mCodecLooper->setName("CodecLooper");
            mCodecLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
        }

        mCodecLooper->registerHandler(mCodec);
    } else {
        mLooper->registerHandler(mCodec);
    }

    mLooper->registerHandler(this);

    // register a callback on the CodecBase
    mCodec->setCallback(
            std::unique_ptr<CodecBase::CodecCallback>(
                    new CodecCallback(new AMessage(kWhatCodecNotify, this))));
    // fetch the CodecBase's BufferChannel
    mBufferChannel = mCodec->getBufferChannel();
    // register a callback on the BufferChannel
    mBufferChannel->setCallback(
            std::unique_ptr<CodecBase::BufferCallback>(
                    new BufferCallback(new AMessage(kWhatCodecNotify, this))));

    sp<AMessage> msg = new AMessage(kWhatInit, this);
    if (mCodecInfo) {
        msg->setObject("codecInfo", mCodecInfo);
        // name may be different from mCodecInfo->getCodecName() if we stripped
        // ".secure"
    }
    msg->setString("name", name);

    // ......

    err = PostAndAwaitResponse(msg, &response);
    return err;
}
The init method does the following:
1. Fetch the codecInfo for the componentName from MediaCodecList (mGetCodecInfo, like mGetCodecBase below, is a function pointer set up in the constructor), inspect the codecInfo's media types to decide whether this MediaCodec instance handles video or audio, and read the component's owner. Why the owner? Android currently ships two codec frameworks, OMX and Codec 2.0, and the component owner records which framework the component belongs to.
//static
sp<CodecBase> MediaCodec::GetCodecBase(const AString &name, const char *owner) {
    if (owner) {
        if (strcmp(owner, "default") == 0) {
            return new ACodec;
        } else if (strncmp(owner, "codec2", 6) == 0) {
            return CreateCCodec();
        }
    }

    if (name.startsWithIgnoreCase("c2.")) {
        return CreateCCodec();
    } else if (name.startsWithIgnoreCase("omx.")) {
        // at this time only ACodec specifies a mime type.
        return new ACodec;
    } else if (name.startsWithIgnoreCase("android.filter.")) {
        return new MediaFilter;
    } else {
        return NULL;
    }
}
The code that creates the CodecBase is short, so it is quoted in full above. There are two decision mechanisms: first by owner, then by the prefix of the component name. For example (typical AOSP component names, given here for illustration): "c2.android.avc.decoder", or any component whose owner starts with "codec2", gets a CCodec, while "OMX.google.h264.decoder" with owner "default" gets an ACodec.
2. Set a looper on the CodecBase. A video codec gets a newly created, dedicated looper; an audio codec reuses the looper handed down from above. MediaCodec itself always runs on the looper handed down from above.
3. Register a callback on the CodecBase, a CodecCallback object. The AMessage target stored inside the CodecCallback is the MediaCodec object, so callback messages emitted by the CodecBase are relayed through CodecCallback to MediaCodec for handling.
4. Fetch the CodecBase's BufferChannel.
5. Register a callback on the BufferChannel, a BufferCallback object; this works the same way as the CodecBase callback.
6. Post a kWhatInit message, handled in onMessageReceived, where the codecInfo and componentName are repacked and used to initialize the CodecBase object:
setState(INITIALIZING);

sp<RefBase> codecInfo;
(void)msg->findObject("codecInfo", &codecInfo);
AString name;
CHECK(msg->findString("name", &name));

sp<AMessage> format = new AMessage;
if (codecInfo) {
    format->setObject("codecInfo", codecInfo);
}
format->setString("componentName", name);

mCodec->initiateAllocateComponent(format);
At this point the creation of the MediaCodec is complete. How the CodecBase is created and initialized will be studied separately.
2. configure
The configure code is fairly long but quite simple; only a small excerpt is shown here.
sp<AMessage> msg = new AMessage(kWhatConfigure, this);
msg->setMessage("format", format);
msg->setInt32("flags", flags);
msg->setObject("surface", surface);

if (crypto != NULL || descrambler != NULL) {
    if (crypto != NULL) {
        msg->setPointer("crypto", crypto.get());
    } else {
        msg->setPointer("descrambler", descrambler.get());
    }
    if (mMetricsHandle != 0) {
        mediametrics_setInt32(mMetricsHandle, kCodecCrypto, 1);
    }
} else if (mFlags & kFlagIsSecure) {
    ALOGW("Crypto or descrambler should be given for secure codec");
}

err = PostAndAwaitResponse(msg, &response);
This method does two things:
1. Parse the parameters out of the incoming format and store them in the MediaCodec.
2. Repack the format, surface, crypto, and related information into a message, handled in onMessageReceived:
case kWhatConfigure: {
    sp<RefBase> obj;
    CHECK(msg->findObject("surface", &obj));

    sp<AMessage> format;
    CHECK(msg->findMessage("format", &format));

    // setSurface
    if (obj != NULL) {
        if (!format->findInt32(KEY_ALLOW_FRAME_DROP, &mAllowFrameDroppingBySurface)) {
            // allow frame dropping by surface by default
            mAllowFrameDroppingBySurface = true;
        }
        format->setObject("native-window", obj);
        status_t err = handleSetSurface(static_cast<Surface *>(obj.get()));
        if (err != OK) {
            PostReplyWithError(replyID, err);
            break;
        }
    } else {
        // we are not using surface so this variable is not used, but initialize sensibly anyway
        mAllowFrameDroppingBySurface = false;

        handleSetSurface(NULL);
    }

    uint32_t flags;
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    if (flags & CONFIGURE_FLAG_USE_BLOCK_MODEL) {
        if (!(mFlags & kFlagIsAsync)) {
            PostReplyWithError(replyID, INVALID_OPERATION);
            break;
        }
        mFlags |= kFlagUseBlockModel;
    }
    mReplyID = replyID;
    setState(CONFIGURING);

    // fetch the crypto object
    void *crypto;
    if (!msg->findPointer("crypto", &crypto)) {
        crypto = NULL;
    }

    // hand the crypto object to the BufferChannel
    mCrypto = static_cast<ICrypto *>(crypto);
    mBufferChannel->setCrypto(mCrypto);

    // fetch the descrambler info
    void *descrambler;
    if (!msg->findPointer("descrambler", &descrambler)) {
        descrambler = NULL;
    }

    // hand the descrambler to the BufferChannel
    mDescrambler = static_cast<IDescrambler *>(descrambler);
    mBufferChannel->setDescrambler(mDescrambler);

    // determine from the flags whether this is an encoder
    format->setInt32("flags", flags);
    if (flags & CONFIGURE_FLAG_ENCODE) {
        format->setInt32("encoder", true);
        mFlags |= kFlagIsEncoder;
    }

    // extract the csd buffers
    extractCSD(format);

    // check whether tunnel mode is requested
    int32_t tunneled;
    if (format->findInt32("feature-tunneled-playback", &tunneled) && tunneled != 0) {
        ALOGI("Configuring TUNNELED video playback.");
        mTunneled = true;
    } else {
        mTunneled = false;
    }

    int32_t background = 0;
    if (format->findInt32("android._background-mode", &background) && background) {
        androidSetThreadPriority(gettid(), ANDROID_PRIORITY_BACKGROUND);
    }

    // call the CodecBase's configure path
    mCodec->initiateConfigureComponent(format);
    break;
}
configure is crucial: player features such as whether a surface is used, tunnel mode, encrypted playback, and whether the component acts as an encoder are all set up here.
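As a hedged client-side sketch of this step (the format keys shown are ones this code path reads or forwards; the width/height values and the helper name are assumptions), configuration followed by start looks like:

// Hedged configure+start sketch for a 1080p AVC decoder with a surface.
status_t configureAndStart(const sp<MediaCodec> &codec, const sp<Surface> &surface) {
    sp<AMessage> format = new AMessage;
    format->setString("mime", "video/avc");
    format->setInt32("width", 1920);
    format->setInt32("height", 1080);
    // csd-0 / csd-1 buffers (SPS/PPS) would also be set here;
    // extractCSD() pulls them out of the format during configure.

    status_t err = codec->configure(format, surface,
                                    NULL /* crypto */, 0 /* flags */);
    if (err != OK) {
        return err;
    }
    return codec->start();  // CONFIGURED -> STARTING -> STARTED (start is covered next)
}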
3. Start
Once configured, the MediaCodec state is set to CONFIGURED, and playback can begin.
setState(STARTING);
mCodec->initiateStart();
The start method is simple: it sets the state to STARTING and calls the CodecBase's start method. Presumably, once the CodecBase has started successfully, a callback sets the state to STARTED.
4. setCallback
setCallback really belongs before Start, because the upper layer can only use MediaCodec properly once a callback has been set. The callback forwards the events that the lower layers deliver to MediaCodec up to the layer above, which then handles events such as CB_INPUT_AVAILABLE.
The method itself is trivial:
sp<AMessage> callback;
CHECK(msg->findMessage("callback", &callback));
mCallback = callback;
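This mirrors the pattern NuPlayer's Decoder uses (sketch; kWhatCodecNotify is a local message ID chosen by the caller): the callback AMessage targets the caller's own AHandler, and for every event MediaCodec posts a dup of it carrying a "callbackID".

// Hedged sketch: register the callback from inside an AHandler subclass.
sp<AMessage> reply = new AMessage(kWhatCodecNotify, this);
mCodec->setCallback(reply);

// Later, in the handler's onMessageReceived():
case kWhatCodecNotify: {
    int32_t cbID;
    CHECK(msg->findInt32("callbackID", &cbID));
    switch (cbID) {
        case MediaCodec::CB_INPUT_AVAILABLE:       /* fill & queue input */  break;
        case MediaCodec::CB_OUTPUT_AVAILABLE:      /* AVSync, render/drop */ break;
        case MediaCodec::CB_OUTPUT_FORMAT_CHANGED: /* react to new format */ break;
        case MediaCodec::CB_ERROR:                 /* handle codec error */  break;
    }
    break;
}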
5. Getting buffers from the upper layer
Two pairs of methods are involved, four in total: getInputBuffers / getOutputBuffers and getInputBuffer / getOutputBuffer.
getInputBuffers / getOutputBuffers fetch the decoder's entire input or output buffer array in one call. The buffers created in the CodecBase are all managed by the BufferChannel, so these end up calling the BufferChannel's getInputBufferArray / getOutputBufferArray. Note from the code below that these array getters are rejected with INVALID_OPERATION in async mode.
status_t MediaCodec::getInputBuffers(Vector<sp<MediaCodecBuffer> > *buffers) const {
    sp<AMessage> msg = new AMessage(kWhatGetBuffers, this);
    msg->setInt32("portIndex", kPortIndexInput);
    msg->setPointer("buffers", buffers);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}

case kWhatGetBuffers: {
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));
    if (!isExecuting() || (mFlags & kFlagIsAsync)) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    int32_t portIndex;
    CHECK(msg->findInt32("portIndex", &portIndex));

    Vector<sp<MediaCodecBuffer> > *dstBuffers;
    CHECK(msg->findPointer("buffers", (void **)&dstBuffers));

    dstBuffers->clear();
    if (portIndex != kPortIndexInput || !mHaveInputSurface) {
        if (portIndex == kPortIndexInput) {
            mBufferChannel->getInputBufferArray(dstBuffers);
        } else {
            mBufferChannel->getOutputBufferArray(dstBuffers);
        }
    }

    (new AMessage)->postReply(replyID);
    break;
}
getInputBuffer / getOutputBuffer look up a single buffer by index in MediaCodec's buffer lists; the elements of those lists are added by the CodecBase through the callback methods.
status_t MediaCodec::getOutputBuffer(size_t index, sp<MediaCodecBuffer> *buffer) {
    sp<AMessage> format;
    return getBufferAndFormat(kPortIndexOutput, index, buffer, &format);
}

status_t MediaCodec::getBufferAndFormat(
        size_t portIndex, size_t index,
        sp<MediaCodecBuffer> *buffer, sp<AMessage> *format) {
    if (buffer == NULL) {
        ALOGE("getBufferAndFormat - null MediaCodecBuffer");
        return INVALID_OPERATION;
    }

    if (format == NULL) {
        ALOGE("getBufferAndFormat - null AMessage");
        return INVALID_OPERATION;
    }

    buffer->clear();
    format->clear();

    if (!isExecuting()) {
        ALOGE("getBufferAndFormat - not executing");
        return INVALID_OPERATION;
    }

    Mutex::Autolock al(mBufferLock);

    std::vector<BufferInfo> &buffers = mPortBuffers[portIndex];
    if (index >= buffers.size()) {
        return INVALID_OPERATION;
    }

    const BufferInfo &info = buffers[index];
    if (!info.mOwnedByClient) {
        return INVALID_OPERATION;
    }

    *buffer = info.mData;
    *format = info.mData->format();

    return OK;
}
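For contrast, in synchronous mode (no callback set) the client polls with the dequeue APIs instead; a hedged sketch (accessUnitSize and ptsUs are placeholders standing in for data from the extractor):

// Hedged synchronous-mode sketch: poll for an input slot, then for output.
size_t index;
if (codec->dequeueInputBuffer(&index, 10000 /* timeoutUs */) == OK) {
    sp<MediaCodecBuffer> inBuf;
    codec->getInputBuffer(index, &inBuf);
    size_t accessUnitSize = 0;  // placeholder: filled from the extractor
    int64_t ptsUs = 0;          // placeholder: pts of the access unit
    // ... copy one access unit into inBuf->data(), up to inBuf->capacity() ...
    codec->queueInputBuffer(index, 0 /* offset */, accessUnitSize,
                            ptsUs, 0 /* flags */);
}

size_t offset, size;
int64_t outPtsUs;
uint32_t flags;
if (codec->dequeueOutputBuffer(&index, &offset, &size,
                               &outPtsUs, &flags, 10000 /* timeoutUs */) == OK) {
    codec->renderOutputBufferAndRelease(index);  // or releaseOutputBuffer(index)
}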
6. How buffers are processed
Next, let's look at how input and output buffers are handled.
kPortIndexInput
The BufferChannel calls BufferCallback's onInputBufferAvailable method to add an input buffer to the queue:
void BufferCallback::onInputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatFillThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}
The handling in onMessageReceived is not too long; it does five things, summarized after the code:
case kWhatFillThisBuffer: {
    // add the buffer to mPortBuffers and its index to mAvailPortBuffers
    /* size_t index = */updateBuffers(kPortIndexInput, msg);

    // when flushing/stopping/releasing, clear the indices in mAvailPortBuffers
    // and discard the buffer contents
    if (mState == FLUSHING || mState == STOPPING || mState == RELEASING) {
        returnBuffersToCodecOnPort(kPortIndexInput);
        break;
    }

    // if there are csd buffers, write them to the decoder first and then
    // clear them; csd buffers may be set again after the next seek/flush
    if (!mCSD.empty()) {
        ssize_t index = dequeuePortBuffer(kPortIndexInput);
        CHECK_GE(index, 0);

        status_t err = queueCSDInputBuffer(index);

        if (err != OK) {
            ALOGE("queueCSDInputBuffer failed w/ error %d", err);
            setStickyError(err);
            postActivityNotificationIfPossible();
            cancelPendingDequeueOperations();
        }
        break;
    }

    // drain the buffers in mLeftover first; not encountered yet
    if (!mLeftover.empty()) {
        ssize_t index = dequeuePortBuffer(kPortIndexInput);
        CHECK_GE(index, 0);

        status_t err = handleLeftover(index);
        if (err != OK) {
            setStickyError(err);
            postActivityNotificationIfPossible();
            cancelPendingDequeueOperations();
        }
        break;
    }

    // in async mode (a callback was set), call onInputBufferAvailable to
    // notify the upper layer; otherwise wait for a synchronous dequeue
    if (mFlags & kFlagIsAsync) {
        if (!mHaveInputSurface) {
            if (mState == FLUSHED) {
                mHavePendingInputBuffers = true;
            } else {
                onInputBufferAvailable();
            }
        }
    } else if (mFlags & kFlagDequeueInputPending) {
        CHECK(handleDequeueInputBuffer(mDequeueInputReplyID));

        ++mDequeueInputTimeoutGeneration;
        mFlags &= ~kFlagDequeueInputPending;
        mDequeueInputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }
    break;
}
1. Call updateBuffers to store the incoming input buffer in mPortBuffers[kPortIndexInput] and its index in mAvailPortBuffers.
2. Check whether the current state requires discarding all buffers.
3. If there are csd buffers, write them to the decoder first.
4. Drain any buffers in mLeftover first; we have not encountered this yet.
5. If a callback was set, we are in async mode: call onInputBufferAvailable to notify the upper layer; otherwise wait for a synchronous dequeue call.
void MediaCodec::onInputBufferAvailable() {
    int32_t index;
    // loop until there are no more indices in mAvailPortBuffers
    while ((index = dequeuePortBuffer(kPortIndexInput)) >= 0) {
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_INPUT_AVAILABLE);
        msg->setInt32("index", index);
        // notify the upper layer
        msg->post();
    }
}

ssize_t MediaCodec::dequeuePortBuffer(int32_t portIndex) {
    CHECK(portIndex == kPortIndexInput || portIndex == kPortIndexOutput);

    // peek the first available index in mAvailPortBuffers, then take the
    // buffer at that position in mPortBuffers
    BufferInfo *info = peekNextPortBuffer(portIndex);
    if (!info) {
        return -EAGAIN;
    }

    List<size_t> *availBuffers = &mAvailPortBuffers[portIndex];
    size_t index = *availBuffers->begin();
    CHECK_EQ(info, &mPortBuffers[portIndex][index]);
    // erase the first index
    availBuffers->erase(availBuffers->begin());

    // mOwnedByClient will be studied when we look at CodecBase
    CHECK(!info->mOwnedByClient);
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = true;

        // set image-data
        if (info->mData->format() != NULL) {
            sp<ABuffer> imageData;
            if (info->mData->format()->findBuffer("image-data", &imageData)) {
                info->mData->meta()->setBuffer("image-data", imageData);
            }
            int32_t left, top, right, bottom;
            if (info->mData->format()->findRect("crop", &left, &top, &right, &bottom)) {
                info->mData->meta()->setRect("crop-rect", left, top, right, bottom);
            }
        }
    }

    // return the index
    return index;
}
onInputBufferAvailable notifies the upper layer of all queued input-buffer indices in one pass. With an index in hand, the upper layer can call getInputBuffer to get the buffer, fill it, and finally call queueInputBuffer to hand it to the decoder.
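On the client side, the CB_INPUT_AVAILABLE handling then looks roughly like this (sketch; readAccessUnit() is a hypothetical helper standing in for the extractor):

// Hedged async-mode input sketch: the index arrives in the callback message.
void onInputAvailable(const sp<MediaCodec> &codec, size_t index) {
    sp<MediaCodecBuffer> buffer;
    if (codec->getInputBuffer(index, &buffer) != OK || buffer == NULL) {
        return;
    }
    int64_t ptsUs = 0;
    // readAccessUnit() is a hypothetical extractor helper that fills the buffer.
    size_t size = readAccessUnit(buffer->data(), buffer->capacity(), &ptsUs);
    // This posts kWhatQueueInputBuffer; onQueueInputBuffer() then hands the
    // buffer to the BufferChannel.
    codec->queueInputBuffer(index, 0 /* offset */, size, ptsUs, 0 /* flags */);
}

Next, let's see how queueInputBuffer actually performs the write.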
status_t MediaCodec::queueInputBuffer(
        size_t index,
        size_t offset,
        size_t size,
        int64_t presentationTimeUs,
        uint32_t flags,
        AString *errorDetailMsg) {
    if (errorDetailMsg != NULL) {
        errorDetailMsg->clear();
    }

    sp<AMessage> msg = new AMessage(kWhatQueueInputBuffer, this);
    msg->setSize("index", index);
    msg->setSize("offset", offset);
    msg->setSize("size", size);
    msg->setInt64("timeUs", presentationTimeUs);
    msg->setInt32("flags", flags);
    msg->setPointer("errorDetailMsg", errorDetailMsg);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
queueInputBuffer packs the index, pts, flags, size, and so on into a message; the actual work is done in onMessageReceived.
case kWhatQueueInputBuffer: {
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = UNKNOWN_ERROR;
    // if mLeftover is not empty, append this message to it first
    if (!mLeftover.empty()) {
        mLeftover.push_back(msg);
        size_t index;
        msg->findSize("index", &index);
        err = handleLeftover(index);
    } else {
        // otherwise handle it directly in onQueueInputBuffer
        err = onQueueInputBuffer(msg);
    }

    PostReplyWithError(replyID, err);
    break;
}
There are two paths: either the message joins the mLeftover queue and is handled via handleLeftover, or onQueueInputBuffer is called directly. Since we have not run into mLeftover yet, let's see how onQueueInputBuffer works.
status_t MediaCodec::onQueueInputBuffer(const sp<AMessage> &msg) {
    size_t index;
    size_t offset;
    size_t size;
    int64_t timeUs;
    uint32_t flags;
    CHECK(msg->findSize("index", &index));
    CHECK(msg->findInt64("timeUs", &timeUs));
    CHECK(msg->findInt32("flags", (int32_t *)&flags));
    std::shared_ptr<C2Buffer> c2Buffer;
    sp<hardware::HidlMemory> memory;
    sp<RefBase> obj;
    // c2buffer / memory are used when queueing csd buffers / encrypted buffers
    if (msg->findObject("c2buffer", &obj)) {
        CHECK(obj);
        c2Buffer = static_cast<WrapperObject<std::shared_ptr<C2Buffer>> *>(obj.get())->value;
    } else if (msg->findObject("memory", &obj)) {
        CHECK(obj);
        memory = static_cast<WrapperObject<sp<hardware::HidlMemory>> *>(obj.get())->value;
        CHECK(msg->findSize("offset", &offset));
    } else {
        CHECK(msg->findSize("offset", &offset));
    }
    const CryptoPlugin::SubSample *subSamples;
    size_t numSubSamples;
    const uint8_t *key;
    const uint8_t *iv;
    CryptoPlugin::Mode mode = CryptoPlugin::kMode_Unencrypted;
    CryptoPlugin::SubSample ss;
    CryptoPlugin::Pattern pattern;

    if (msg->findSize("size", &size)) {
        if (hasCryptoOrDescrambler()) {
            ss.mNumBytesOfClearData = size;
            ss.mNumBytesOfEncryptedData = 0;
            subSamples = &ss;
            numSubSamples = 1;
            key = NULL;
            iv = NULL;
            pattern.mEncryptBlocks = 0;
            pattern.mSkipBlocks = 0;
        }
    } else if (!c2Buffer) {
        if (!hasCryptoOrDescrambler()) {
            return -EINVAL;
        }

        CHECK(msg->findPointer("subSamples", (void **)&subSamples));
        CHECK(msg->findSize("numSubSamples", &numSubSamples));
        CHECK(msg->findPointer("key", (void **)&key));
        CHECK(msg->findPointer("iv", (void **)&iv));
        CHECK(msg->findInt32("encryptBlocks", (int32_t *)&pattern.mEncryptBlocks));
        CHECK(msg->findInt32("skipBlocks", (int32_t *)&pattern.mSkipBlocks));

        int32_t tmp;
        CHECK(msg->findInt32("mode", &tmp));

        mode = (CryptoPlugin::Mode)tmp;

        size = 0;
        for (size_t i = 0; i < numSubSamples; ++i) {
            size += subSamples[i].mNumBytesOfClearData;
            size += subSamples[i].mNumBytesOfEncryptedData;
        }
    }

    if (index >= mPortBuffers[kPortIndexInput].size()) {
        return -ERANGE;
    }

    // take the buffer at the given index in mPortBuffers[kPortIndexInput]
    BufferInfo *info = &mPortBuffers[kPortIndexInput][index];
    sp<MediaCodecBuffer> buffer = info->mData;

    if (c2Buffer || memory) {
        sp<AMessage> tunings;
        CHECK(msg->findMessage("tunings", &tunings));
        onSetParameters(tunings);

        status_t err = OK;
        if (c2Buffer) {
            err = mBufferChannel->attachBuffer(c2Buffer, buffer);
        } else if (memory) {
            err = mBufferChannel->attachEncryptedBuffer(
                    memory, (mFlags & kFlagIsSecure), key, iv, mode, pattern,
                    offset, subSamples, numSubSamples, buffer);
        } else {
            err = UNKNOWN_ERROR;
        }

        if (err == OK && !buffer->asC2Buffer()
                && c2Buffer && c2Buffer->data().type() == C2BufferData::LINEAR) {
            C2ConstLinearBlock block{c2Buffer->data().linearBlocks().front()};
            if (block.size() > buffer->size()) {
                C2ConstLinearBlock leftover = block.subBlock(
                        block.offset() + buffer->size(), block.size() - buffer->size());
                sp<WrapperObject<std::shared_ptr<C2Buffer>>> obj{
                    new WrapperObject<std::shared_ptr<C2Buffer>>{
                        C2Buffer::CreateLinearBuffer(leftover)}};
                msg->setObject("c2buffer", obj);
                mLeftover.push_front(msg);
                // Not sending EOS if we have leftovers
                flags &= ~BUFFER_FLAG_EOS;
            }
        }

        offset = buffer->offset();
        size = buffer->size();
        if (err != OK) {
            return err;
        }
    }

    if (buffer == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    if (offset + size > buffer->capacity()) {
        return -EINVAL;
    }

    // record the given offset and pts in the buffer
    buffer->setRange(offset, size);
    buffer->meta()->setInt64("timeUs", timeUs);
    if (flags & BUFFER_FLAG_EOS) {
        // for EOS, also set the flag in the buffer's meta
        buffer->meta()->setInt32("eos", true);
    }

    // for a csd buffer, raise the corresponding flag to tell the codec
    if (flags & BUFFER_FLAG_CODECCONFIG) {
        buffer->meta()->setInt32("csd", true);
    }

    // not sure yet what this flag is for
    if (mTunneled) {
        TunnelPeekState previousState = mTunnelPeekState;
        switch(mTunnelPeekState){
            case TunnelPeekState::kEnabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kEnabledQueued;
                break;
            case TunnelPeekState::kDisabledNoBuffer:
                buffer->meta()->setInt32("tunnel-first-frame", 1);
                mTunnelPeekState = TunnelPeekState::kDisabledQueued;
                break;
            default:
                break;
        }
    }

    status_t err = OK;
    if (hasCryptoOrDescrambler() && !c2Buffer && !memory) {
        AString *errorDetailMsg;
        CHECK(msg->findPointer("errorDetailMsg", (void **)&errorDetailMsg));
        // Notify mCrypto of video resolution changes
        if (mTunneled && mCrypto != NULL) {
            int32_t width, height;
            if (mInputFormat->findInt32("width", &width)
                    && mInputFormat->findInt32("height", &height)
                    && width > 0 && height > 0) {
                if (width != mTunneledInputWidth || height != mTunneledInputHeight) {
                    mTunneledInputWidth = width;
                    mTunneledInputHeight = height;
                    mCrypto->notifyResolution(width, height);
                }
            }
        }
        // queue an encrypted buffer
        err = mBufferChannel->queueSecureInputBuffer(
                buffer,
                (mFlags & kFlagIsSecure),
                key,
                iv,
                mode,
                pattern,
                subSamples,
                numSubSamples,
                errorDetailMsg);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueSecureInputBufferError, err);
            ALOGW("Log queueSecureInputBuffer error: %d", err);
        }
    } else {
        // queue a normal buffer
        err = mBufferChannel->queueInputBuffer(buffer);
        if (err != OK) {
            mediametrics_setInt32(mMetricsHandle, kCodecQueueInputBufferError, err);
            ALOGW("Log queueInputBuffer error: %d", err);
        }
    }

    if (err == OK) {
        // synchronization boundary for getBufferAndFormat
        Mutex::Autolock al(mBufferLock);
        // flip the BufferInfo's owner
        info->mOwnedByClient = false;
        info->mData.clear();

        // record the queued buffer's pts and the time it was queued
        statsBufferSent(timeUs, buffer);
    }

    return err;
}
onQueueInputBuffer is long mainly because many different callers funnel into it, e.g. queueInputBuffer here, queueCSDInputBuffer, and queueSecureInputBuffer, so it performs many checks; in the end it calls the BufferChannel's queueInputBuffer or queueSecureInputBuffer.
This completes a full input-buffer round trip.
kPortIndexOutput
The BufferChannel calls the callback method onOutputBufferAvailable to hand an output buffer upward:
void BufferCallback::onOutputBufferAvailable(
        size_t index, const sp<MediaCodecBuffer> &buffer) {
    sp<AMessage> notify(mNotify->dup());
    notify->setInt32("what", kWhatDrainThisBuffer);
    notify->setSize("index", index);
    notify->setObject("buffer", buffer);
    notify->post();
}
The message is again handled in onMessageReceived:
case kWhatDrainThisBuffer: {
    // add the output buffer to the queue
    /* size_t index = */updateBuffers(kPortIndexOutput, msg);

    if (mState == FLUSHING || mState == STOPPING || mState == RELEASING) {
        returnBuffersToCodecOnPort(kPortIndexOutput);
        break;
    }

    if (mFlags & kFlagIsAsync) {
        sp<RefBase> obj;
        CHECK(msg->findObject("buffer", &obj));
        sp<MediaCodecBuffer> buffer = static_cast<MediaCodecBuffer *>(obj.get());

        // In asynchronous mode, output format change is processed immediately.
        // if the output format changed, update it
        handleOutputFormatChangeIfNeeded(buffer);
        // notify the upper layer asynchronously to process the output buffer
        onOutputBufferAvailable();
    } else if (mFlags & kFlagDequeueOutputPending) {
        CHECK(handleDequeueOutputBuffer(mDequeueOutputReplyID));

        ++mDequeueOutputTimeoutGeneration;
        mFlags &= ~kFlagDequeueOutputPending;
        mDequeueOutputReplyID = 0;
    } else {
        postActivityNotificationIfPossible();
    }
    break;
}
The process is familiar by now:
1. Add the output buffer and its index to the queues.
2. If the output format changed, update it.
3. Call onOutputBufferAvailable to notify the upper layer asynchronously.
void MediaCodec::onOutputBufferAvailable() {
    int32_t index;
    while ((index = dequeuePortBuffer(kPortIndexOutput)) >= 0) {
        const sp<MediaCodecBuffer> &buffer =
            mPortBuffers[kPortIndexOutput][index].mData;
        sp<AMessage> msg = mCallback->dup();
        msg->setInt32("callbackID", CB_OUTPUT_AVAILABLE);
        msg->setInt32("index", index);
        msg->setSize("offset", buffer->offset());
        msg->setSize("size", buffer->size());

        int64_t timeUs;
        CHECK(buffer->meta()->findInt64("timeUs", &timeUs));

        msg->setInt64("timeUs", timeUs);

        int32_t flags;
        CHECK(buffer->meta()->findInt32("flags", &flags));

        msg->setInt32("flags", flags);

        // record when the output buffer was handed up and its pts
        statsBufferReceived(timeUs, buffer);

        msg->post();
    }
}
After the upper layer gets an output buffer and finishes AVSync, it decides whether to render or drop the frame, calling renderOutputBufferAndRelease or releaseOutputBuffer respectively.
status_t MediaCodec::renderOutputBufferAndRelease(size_t index, int64_t timestampNs) {
    sp<AMessage> msg = new AMessage(kWhatReleaseOutputBuffer, this);
    msg->setSize("index", index);
    msg->setInt32("render", true);
    msg->setInt64("timestampNs", timestampNs);

    sp<AMessage> response;
    return PostAndAwaitResponse(msg, &response);
}
case kWhatReleaseOutputBuffer: {
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    if (!isExecuting()) {
        PostReplyWithError(replyID, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(replyID, getStickyError());
        break;
    }

    status_t err = onReleaseOutputBuffer(msg);
    PostReplyWithError(replyID, err);
    break;
}
status_t MediaCodec::onReleaseOutputBuffer(const sp<AMessage> &msg) {
    size_t index;
    CHECK(msg->findSize("index", &index));

    int32_t render;
    if (!msg->findInt32("render", &render)) {
        render = 0;
    }

    if (!isExecuting()) {
        return -EINVAL;
    }

    if (index >= mPortBuffers[kPortIndexOutput].size()) {
        return -ERANGE;
    }

    BufferInfo *info = &mPortBuffers[kPortIndexOutput][index];

    if (info->mData == nullptr || !info->mOwnedByClient) {
        return -EACCES;
    }

    // synchronization boundary for getBufferAndFormat
    sp<MediaCodecBuffer> buffer;
    {
        Mutex::Autolock al(mBufferLock);
        info->mOwnedByClient = false;
        buffer = info->mData;
        info->mData.clear();
    }

    if (render && buffer->size() != 0) {
        int64_t mediaTimeUs = -1;
        buffer->meta()->findInt64("timeUs", &mediaTimeUs);

        int64_t renderTimeNs = 0;
        if (!msg->findInt64("timestampNs", &renderTimeNs)) {
            // use media timestamp if client did not request a specific render timestamp
            ALOGV("using buffer PTS of %lld", (long long)mediaTimeUs);
            renderTimeNs = mediaTimeUs * 1000;
        }

        if (mSoftRenderer != NULL) {
            std::list<FrameRenderTracker::Info> doneFrames = mSoftRenderer->render(
                    buffer->data(), buffer->size(), mediaTimeUs, renderTimeNs,
                    mPortBuffers[kPortIndexOutput].size(), buffer->format());

            // if we are running, notify rendered frames
            if (!doneFrames.empty() && mState == STARTED && mOnFrameRenderedNotification != NULL) {
                sp<AMessage> notify = mOnFrameRenderedNotification->dup();
                sp<AMessage> data = new AMessage;
                if (CreateFramesRenderedMessage(doneFrames, data)) {
                    notify->setMessage("data", data);
                    notify->post();
                }
            }
        }
        status_t err = mBufferChannel->renderOutputBuffer(buffer, renderTimeNs);

        if (err == NO_INIT) {
            ALOGE("rendering to non-initilized(obsolete) surface");
            return err;
        }
        if (err != OK) {
            ALOGI("rendring output error %d", err);
        }
    } else {
        mBufferChannel->discardBuffer(buffer);
    }

    return OK;
}
As we can see, rendering ultimately goes through the BufferChannel's renderOutputBuffer.
This completes the handling of one output buffer.
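Putting the client side of the output path together (hedged sketch; shouldRender() is a hypothetical AVSync decision comparing the frame's pts against the audio clock):

// Hedged async-mode output sketch: render on time, drop when late.
void onOutputAvailable(const sp<MediaCodec> &codec, size_t index, int64_t ptsUs) {
    int64_t renderTimeNs = 0;
    // shouldRender() is a hypothetical helper; on success it also returns the
    // target render time derived from the AVSync clock.
    if (shouldRender(ptsUs, &renderTimeNs)) {
        codec->renderOutputBufferAndRelease(index, renderTimeNs);
    } else {
        // Dropping ends in BufferChannel::discardBuffer(), as seen above.
        codec->releaseOutputBuffer(index);
    }
}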
7. flush
case kWhatFlush: {
    if (!isExecuting()) {
        PostReplyWithError(msg, INVALID_OPERATION);
        break;
    } else if (mFlags & kFlagStickyError) {
        PostReplyWithError(msg, getStickyError());
        break;
    }

    if (mReplyID) {
        mDeferredMessages.push_back(msg);
        break;
    }
    sp<AReplyToken> replyID;
    CHECK(msg->senderAwaitsResponse(&replyID));

    mReplyID = replyID;
    // TODO: skip flushing if already FLUSHED
    setState(FLUSHING);

    // call the CodecBase's signalFlush
    mCodec->signalFlush();
    // discard all buffers
    returnBuffersToCodec();
    TunnelPeekState previousState = mTunnelPeekState;
    mTunnelPeekState = TunnelPeekState::kEnabledNoBuffer;
    ALOGV("TunnelPeekState: %s -> %s",
            asString(previousState),
            asString(TunnelPeekState::kEnabledNoBuffer));
    break;
}
The flush method first sets the state to FLUSHING and calls the CodecBase's signalFlush method (once that completes, a callback should set the state to FLUSHED), then discards all buffers. Discarding happens in two parts:
first, the BufferChannel's discardBuffer method is called to return the buffers to the decoder; second, the available indices held by MediaCodec are cleared.
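A seek would then typically be implemented like this (hedged sketch; note from the kWhatFillThisBuffer handling above that in async mode input callbacks are held back while the state is FLUSHED, so start() is called again to resume):

// Hedged seek sketch: drop everything in flight, then resume decoding.
status_t onSeek(const sp<MediaCodec> &codec) {
    status_t err = codec->flush();  // STARTED -> FLUSHING -> FLUSHED
    if (err != OK) {
        return err;
    }
    // Moving back to STARTED re-delivers any pending input slots; the player
    // would also re-send csd buffers if needed before regular data.
    return codec->start();
}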
MediaCodec has no pause or resume methods! Pause and resume have to be implemented by the player. That covers the basic operating principles; the remaining methods are left for now.