[Date: 2016-10] [Status: Open]
[Keywords: android, nuplayer, open-source player, playback framework]
0 Introduction
It has been about a month, and we continue analyzing the source code of the AOSP playback framework. This time we dive into the NuPlayer class. Whereas NuPlayerDriver exposes the player-facing interface, NuPlayer inherits from AHandler and is the hub that ties Source, Decoder, and Renderer together in the AOSP playback framework.
I hope that after reading this article you will have a reasonable understanding of the structure of the NuPlayer source code.
This is the third article in my NuPlayer playback framework series.
1 Main Interfaces and Core Class Members
The NuPlayer class is called directly by NuPlayerDriver. Its main interfaces are as follows:
// code from NuPlayer.h (~/frameworks/av/media/libmediaplayerservice/nuplayer/)
struct NuPlayer : public AHandler {
NuPlayer(pid_t pid);
void setUID(uid_t uid);
void setDriver(const wp<NuPlayerDriver> &driver);
void setDataSourceAsync(...);
void prepareAsync();
void setVideoSurfaceTextureAsync(const sp<IGraphicBufferProducer> &bufferProducer);
void start();
void pause();
// Will notify the driver through "notifyResetComplete" once finished.
void resetAsync();
// Will notify the driver through "notifySeekComplete" once finished
// and needNotify is true.
void seekToAsync(int64_t seekTimeUs, bool needNotify = false);
status_t setVideoScalingMode(int32_t mode);
status_t getTrackInfo(Parcel* reply) const;
status_t getSelectedTrack(int32_t type, Parcel* reply) const;
status_t selectTrack(size_t trackIndex, bool select, int64_t timeUs);
status_t getCurrentPosition(int64_t *mediaUs);
sp<MetaData> getFileMeta();
float getFrameRate();
protected:
virtual ~NuPlayer();
virtual void onMessageReceived(const sp<AMessage> &msg);
};
These interfaces fall into just a few categories (a rough call-order sketch follows the list):
- Initialization (constructor, setDriver/setDataSourceAsync/prepareAsync/setVideoSurfaceTextureAsync)
- Teardown (destructor, resetAsync)
- Playback control (start/pause/seekToAsync)
- State query (getCurrentPosition/getFileMeta)
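For orientation, below is a hedged sketch of the order in which a driver-style caller might exercise these interfaces for a local file. All variable and function names in the sketch are illustrative, and in reality NuPlayerDriver also owns an ALooper and registers the player on it before any of these asynchronous calls can be processed.
// Hedged sketch: typical call order from a driver-like caller. Names are
// illustrative; error handling and completion callbacks are omitted.
void playLocalFileSketch(const wp<NuPlayerDriver> &driver, int fd, int64_t length,
        const sp<IGraphicBufferProducer> &bufferProducer, pid_t pid) {
    sp<NuPlayer> player = new NuPlayer(pid);                  // construction
    player->setDriver(driver);                                // initialization
    player->setDataSourceAsync(fd, 0 /* offset */, length);
    player->prepareAsync();
    player->setVideoSurfaceTextureAsync(bufferProducer);
    player->start();                                          // playback control
    player->seekToAsync(30 * 1000000ll /* 30s in us */, true /* needNotify */);
    player->pause();
    int64_t positionUs = 0;                                   // state query
    player->getCurrentPosition(&positionUs);
    player->resetAsync();                                     // teardown
}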
The main class members are listed below:
wp<NuPlayerDriver> mDriver; // the caller of these interfaces
sp<Source> mSource; // roughly the counterpart of a demuxer in FFmpeg
sp<Surface> mSurface; // Surface used for display
sp<DecoderBase> mVideoDecoder; // video decoder
sp<DecoderBase> mAudioDecoder; // audio decoder
sp<CCDecoder> mCCDecoder; // closed-caption (subtitle) decoder
sp<Renderer> mRenderer; // renderer
sp<ALooper> mRendererLooper; // dedicated looper for the renderer
2 setDataSourceAsync Implementation Analysis
This function has several different overloads:
void setDataSourceAsync(const sp<IStreamSource> &source);
void setDataSourceAsync(const sp<IMediaHTTPService> &httpService, const char *url,
const KeyedVector<String8, String8> *headers);
void setDataSourceAsync(int fd, int64_t offset, int64_t length);
void setDataSourceAsync(const sp<DataSource> &source);
The appropriate overload is chosen based on the actual use case. Here we take the third one as an example to show how local media files are handled.
Its implementation is shown below:
void NuPlayer::setDataSourceAsync(int fd, int64_t offset, int64_t length) {
sp<AMessage> msg = new AMessage(kWhatSetDataSource, this);
sp<AMessage> notify = new AMessage(kWhatSourceNotify, this);
// create an object for reading the local file
sp<GenericSource> source =
new GenericSource(notify, mUIDValid, mUID);
// the code that does the actual work
status_t err = source->setDataSource(fd, offset, length);
if (err != OK) {
ALOGE("Failed to set data source!");
source = NULL;
}
msg->setObject("source", source);
msg->post();
}
The implementation is straightforward: create a GenericSource object, call its setDataSource interface, and then post a kWhatSetDataSource message.
Let's see how the kWhatSetDataSource message is handled. The code is as follows:
case kWhatSetDataSource:
{
CHECK(mSource == NULL);
status_t err = OK;
sp<RefBase> obj;
CHECK(msg->findObject("source", &obj));
if (obj != NULL) {
Mutex::Autolock autoLock(mSourceLock);
mSource = static_cast<Source *>(obj.get());
} else {
err = UNKNOWN_ERROR;
}
// notify the Driver that the call has completed
CHECK(mDriver != NULL);
sp<NuPlayerDriver> driver = mDriver.promote();
if (driver != NULL) {
driver->notifySetDataSourceCompleted(err);
}
break;
}
As you can see, not much happens here: it simply notifies NuPlayerDriver. Note also that a special message (AMessage) named notify was constructed back in setDataSourceAsync; it is used to pass notifications between the Source and NuPlayer. Its handler in the message loop looks like this:
case kWhatSourceNotify:
{
onSourceNotify(msg);
break;
}
The meaning of this notification will be explained in detail when we discuss the Source later.
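Before moving on, note that almost every public call in NuPlayer follows the same asynchronous idiom: post an AMessage to itself and do the real work in onMessageReceived on the looper thread. Below is a minimal, self-contained sketch of that idiom using the stagefright foundation classes; MyHandler and kWhatDoWork are made-up names used purely for illustration.
// Minimal sketch of the AHandler/AMessage idiom NuPlayer is built on.
// MyHandler and kWhatDoWork are hypothetical names, not part of NuPlayer.
#include <media/stagefright/foundation/ADebug.h>
#include <media/stagefright/foundation/AHandler.h>
#include <media/stagefright/foundation/ALooper.h>
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

struct MyHandler : public AHandler {
    enum { kWhatDoWork = 'work' };

    // Public API: returns immediately; the work runs later on the looper thread.
    void doWorkAsync(int32_t value) {
        sp<AMessage> msg = new AMessage(kWhatDoWork, this);
        msg->setInt32("value", value);
        msg->post();
    }

protected:
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        switch (msg->what()) {
            case kWhatDoWork:
            {
                int32_t value;
                CHECK(msg->findInt32("value", &value));
                // ... do the actual work here, serialized on the looper ...
                break;
            }
            default:
                TRESPASS();
        }
    }
};

// Usage (NuPlayerDriver does the equivalent for NuPlayer):
//   sp<ALooper> looper = new ALooper;
//   looper->setName("my_handler");
//   looper->start();
//   sp<MyHandler> handler = new MyHandler;
//   looper->registerHandler(handler);
//   handler->doWorkAsync(42);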
3 prepareAsync
This function corresponds to the MediaPlayerBase::prepare/prepareAsync interfaces and implements asynchronous prepare, which usually involves some extra initialization work. Let's look at the implementation directly:
void NuPlayer::prepareAsync() {
(new AMessage(kWhatPrepare, this))->post();
}
The code simply posts a kWhatPrepare message. Next, let's see how that message is handled:
case kWhatPrepare:
{
mSource->prepareAsync();
break;
}
Ultimately it calls the Source::prepareAsync interface, whose functionality will be explained later. (It probably parses the stream and reads the audio, video, and subtitle track information, the duration, the metadata, and so on.)
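When the Source finishes preparing (or hits an error), it reports back through the notify message we saw being created in setDataSourceAsync, and NuPlayer dispatches it in onSourceNotify (the kWhatSourceNotify case shown earlier). The snippet below is only a hedged sketch of that direction of the channel; kWhatPreparedHypothetical is a made-up constant, and the real Source classes define their own kWhat* values.
// Hedged sketch: how a Source-like component could post a completion
// notification back to NuPlayer through the "notify" AMessage it was given.
// kWhatPreparedHypothetical is a made-up name; the real NuPlayer::Source
// uses its own constants, which NuPlayer handles in onSourceNotify.
#include <media/stagefright/foundation/AMessage.h>
#include <utils/Errors.h>

using namespace android;

enum { kWhatPreparedHypothetical = 'prpd' };

static void postPreparedSketch(const sp<AMessage> &notify, status_t err) {
    sp<AMessage> msg = notify->dup();       // copy of the kWhatSourceNotify message
    msg->setInt32("what", kWhatPreparedHypothetical);
    msg->setInt32("err", err);
    msg->post();                            // delivered to NuPlayer's looper
}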
4 setVideoSurfaceTextureAsync
This interface is called mainly to set the video rendering window. Its implementation is relatively simple: create a Surface, then post an asynchronous kWhatSetVideoSurface message. The code is as follows:
void NuPlayer::setVideoSurfaceTextureAsync( const sp<IGraphicBufferProducer> &bufferProducer) {
sp<AMessage> msg = new AMessage(kWhatSetVideoSurface, this);
if (bufferProducer == NULL) {
msg->setObject("surface", NULL);
} else {
msg->setObject("surface", new Surface(bufferProducer, true /* controlledByApp */));
}
msg->post();
}
So how is the kWhatSetVideoSurface message handled?
case kWhatSetVideoSurface: {
sp<RefBase> obj;
CHECK(msg->findObject("surface", &obj));
sp<Surface> surface = static_cast<Surface *>(obj.get());
// Need to check mStarted before calling mSource->getFormat because NuPlayer might
// be in preparing state and it could take long time.
// When mStarted is true, mSource must have been set.
if (mSource == NULL || !mStarted || mSource->getFormat(false /* audio */) == NULL
// NOTE: mVideoDecoder's mSurface is always non-null
|| (mVideoDecoder != NULL && mVideoDecoder->setVideoSurface(surface) == OK)) {
performSetSurface(surface); // notify NuPlayerDriver that the surface has been set
break;
}
// flush the audio decoder and shut down the video decoder
mDeferredActions.push_back(
new FlushDecoderAction(FLUSH_CMD_FLUSH /* audio */, FLUSH_CMD_SHUTDOWN /* video */));
// eventually calls NuPlayer::performSetSurface
mDeferredActions.push_back(new SetSurfaceAction(surface));
if (obj != NULL || mAudioDecoder != NULL) {
if (mStarted) {
// Issue a seek to refresh the video screen only if started otherwise
// the extractor may not yet be started and will assert.
// If the video decoder is not set (perhaps audio only in this case)
// do not perform a seek as it is not needed.
int64_t currentPositionUs = 0;
if (getCurrentPosition(&currentPositionUs) == OK) {
mDeferredActions.push_back(
new SeekAction(currentPositionUs));
}
}
// with the new surface set, re-scan the sources so the decoders are re-created
mDeferredActions.push_back(new SimpleAction(&NuPlayer::performScanSources));
}
// After a flush without shutdown, decoder is paused.
// Don't resume it until source seek is done, otherwise it could
// start pulling stale data too soon.
mDeferredActions.push_back(
new ResumeDecoderAction(false /* needNotify */));
// process all the Actions queued in mDeferredActions above, then clear the queue
processDeferredActions();
break;
}
The code here is a bit more involved. Its main purpose is to ensure that decoding and display still work correctly after the new Surface is set, since in some cases decoder initialization depends on the specific Surface. It also contains checks on NuPlayer's state and initialization progress.
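The mDeferredActions/processDeferredActions mechanism seen here queues up a sequence of steps (flush, seek, set surface, resume) and then runs them in order. The following is a deliberately simplified sketch of that idea using standard C++ types; the real implementation uses a NuPlayer::Action class hierarchy (SeekAction, SetSurfaceAction, SimpleAction, ...) held through sp<> and also interleaves the steps with decoder-flush completion.
// Simplified sketch of the deferred-action idea; the real NuPlayer uses an
// Action class hierarchy instead of std::function, but the queue-then-drain
// shape is the same.
#include <functional>
#include <list>

struct PlayerSketch {
    using Action = std::function<void(PlayerSketch *)>;

    std::list<Action> mDeferredActions;

    void processDeferredActions() {
        // Drain the queue in FIFO order, executing each step against the player.
        while (!mDeferredActions.empty()) {
            Action action = mDeferredActions.front();
            mDeferredActions.pop_front();
            action(this);
        }
    }

    void performSeek(long long /* seekTimeUs */) { /* ... */ }
    void performSetSurface() { /* ... */ }
};

// Usage, mirroring the kWhatSetVideoSurface handler above:
//   player.mDeferredActions.push_back([](PlayerSketch *p) { p->performSeek(0); });
//   player.mDeferredActions.push_back([](PlayerSketch *p) { p->performSetSurface(); });
//   player.processDeferredActions();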
5 start/pause
The start function is very simple; it just posts a kWhatStart message.
void NuPlayer::start() {
(new AMessage(kWhatStart, this))->post();
}
It is handled in the message handler as follows:
case kWhatStart:
{
if (mStarted) {
// do not resume yet if the source is still buffering
if (!mPausedForBuffering) {
onResume();
}
} else {
onStart();
}
mPausedByClient = false;
break;
}
It simply calls onStart/onResume.
The pause function is implemented similarly, except it posts a kWhatPause message. Its handler in the message loop is:
case kWhatPause:
{
onPause();
mPausedByClient = true;
break;
}
It simply calls onPause. Let's now look at these three functions separately, starting with the simpler ones, onPause and onResume.
NuPlayer::onPause
This function implements pausing; overall it just needs to pause the Source and the Renderer. The code is as follows:
void NuPlayer::onPause() {
if (mPaused) {
return;
}
mPaused = true;
if (mSource != NULL) {
mSource->pause();
}
if (mRenderer != NULL) {
mRenderer->pause();
}
}
NuPlayer::onResume
This function implements resuming. The logic mirrors onPause: resume the Source and the Renderer, plus a few other possible operations. The code is as follows:
void NuPlayer::onResume() {
if (!mPaused || mResetting) {
return;
}
mPaused = false;
if (mSource != NULL) {
mSource->resume();
}
// |mAudioDecoder| may have been released due to the pause timeout, so re-create it if
// needed.
if (audioDecoderStillNeeded() && mAudioDecoder == NULL) {
instantiateDecoder(true /* audio */, &mAudioDecoder);
}
if (mRenderer != NULL) {
mRenderer->resume();
}
}
NuPlayer::onStart
This interface implements the start operation and is relatively more complex: it needs to initialize the decoders, initialize the Renderer, start the Source, and wire the three together. The code is as follows:
void NuPlayer::onStart(int64_t startPositionUs) {
if (!mSourceStarted) {
mSourceStarted = true;
mSource->start(); // start the Source
}
// ... (some code omitted)
sp<AMessage> notify = new AMessage(kWhatRendererNotify, this);
++mRendererGeneration; // below: create the Renderer and its looper, configure them, attach them to the decoders
notify->setInt32("generation", mRendererGeneration);
mRenderer = new Renderer(mAudioSink, notify, flags);
mRendererLooper = new ALooper;
mRendererLooper->setName("NuPlayerRenderer");
mRendererLooper->start(false, false, ANDROID_PRIORITY_AUDIO);
mRendererLooper->registerHandler(mRenderer);
status_t err = mRenderer->setPlaybackSettings(mPlaybackSettings);
float rate = getFrameRate();
if (rate > 0) {
mRenderer->setVideoFrameRate(rate);
}
if (mVideoDecoder != NULL) {
mVideoDecoder->setRenderer(mRenderer);
}
if (mAudioDecoder != NULL) {
mAudioDecoder->setRenderer(mRenderer);
}
postScanSources();
}
There is no decoder initialization in the code above, so we have to look at postScanSources. Its implementation simply posts a kWhatScanSources message. So how does the message loop handle it?
case kWhatScanSources:
{
int32_t generation;
CHECK(msg->findInt32("generation", &generation));
if (generation != mScanSourcesGeneration) {
// Drop obsolete msg.
break;
}
mScanSourcesPending = false;
bool mHadAnySourcesBefore = (mAudioDecoder != NULL) || (mVideoDecoder != NULL);
bool rescan = false;
// initialize video before audio because successful initialization of
// video may change deep buffer mode of audio.
if (mSurface != NULL) { // instantiate the video decoder
if (instantiateDecoder(false, &mVideoDecoder) == -EWOULDBLOCK) {
rescan = true;
}
}
// Don't try to re-open audio sink if there's an existing decoder.
if (mAudioSink != NULL && mAudioDecoder == NULL) { // instantiate the audio decoder
if (instantiateDecoder(true, &mAudioDecoder) == -EWOULDBLOCK) {
rescan = true;
}
}
if (!mHadAnySourcesBefore && (mAudioDecoder != NULL || mVideoDecoder != NULL)) {
// This is the first time we've found anything playable.
// schedule periodic duration polling
if (mSourceFlags & Source::FLAG_DYNAMIC_DURATION) {
schedulePollDuration();
}
}
status_t err; // some error-handling logic follows
if ((err = mSource->feedMoreTSData()) != OK) {
if (mAudioDecoder == NULL && mVideoDecoder == NULL) {
// We're not currently decoding anything (no audio or
// video tracks found) and we just ran out of input data.
if (err == ERROR_END_OF_STREAM) {
notifyListener(MEDIA_PLAYBACK_COMPLETE, 0, 0);
} else {
notifyListener(MEDIA_ERROR, MEDIA_ERROR_UNKNOWN, err);
}
}
break;
}
// if needed, re-scan the sources again later
if (rescan) {
msg->post(100000ll);
mScanSourcesPending = true;
}
break;
}
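A detail worth calling out in this handler is the generation check at the top. NuPlayer stamps delayed or repeatable messages (kWhatScanSources, the renderer notifications, duration polling) with a generation counter and bumps the counter whenever those messages become invalid, so stale messages still sitting in the looper queue are simply dropped. A minimal sketch of that idiom follows; ScanScheduler, kWhatScan, and the field names are hypothetical, not NuPlayer code.
// Minimal sketch of the generation-counter idiom used to drop stale messages.
// ScanScheduler and its members are made-up names for illustration only.
#include <media/stagefright/foundation/ADebug.h>
#include <media/stagefright/foundation/AHandler.h>
#include <media/stagefright/foundation/AMessage.h>

using namespace android;

struct ScanScheduler : public AHandler {
    enum { kWhatScan = 'scan' };

    int32_t mGeneration;

    ScanScheduler() : mGeneration(0) {}

    void scheduleScan(int64_t delayUs) {
        sp<AMessage> msg = new AMessage(kWhatScan, this);
        msg->setInt32("generation", mGeneration);   // stamp with the current generation
        msg->post(delayUs);
    }

    void cancelPendingScans() {
        ++mGeneration;   // everything stamped with the old value becomes obsolete
    }

protected:
    virtual void onMessageReceived(const sp<AMessage> &msg) {
        int32_t generation;
        CHECK(msg->findInt32("generation", &generation));
        if (generation != mGeneration) {
            return;      // drop the obsolete message, just like kWhatScanSources does
        }
        // ... perform the scan ...
    }
};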
6 seekToAsync
This function performs the seek operation. The implementation is simple: it just posts a kWhatSeek message. The code is as follows:
void NuPlayer::seekToAsync(int64_t seekTimeUs, bool needNotify) {
sp<AMessage> msg = new AMessage(kWhatSeek, this);
msg->setInt64("seekTimeUs", seekTimeUs);
msg->setInt32("needNotify", needNotify);
msg->post();
}
The handling code in the message loop is as follows:
case kWhatSeek:
{
int64_t seekTimeUs;
int32_t needNotify;
if (!mStarted) {
// Seek before the player is started. In order to preview video,
// need to start the player and pause it. This branch is called
// only once if needed. After the player is started, any seek
// operation will go through normal path.
// Audio-only cases are handled separately.
onStart(seekTimeUs);
if (mStarted) {
onPause();
mPausedByClient = true;
}
if (needNotify) {
notifyDriverSeekComplete();
}
break;
}
mDeferredActions.push_back(
new FlushDecoderAction(FLUSH_CMD_FLUSH /* audio */,
FLUSH_CMD_FLUSH /* video */));
// the real seek work happens here
mDeferredActions.push_back(new SeekAction(seekTimeUs));
// After a flush without shutdown, decoder is paused.
// Don't resume it until source seek is done, otherwise it could
// start pulling stale data too soon.
mDeferredActions.push_back(new ResumeDecoderAction(needNotify));
processDeferredActions();
break;
}
In the actual code, SeekAction eventually calls performSeek, which is implemented as follows:
void NuPlayer::performSeek(int64_t seekTimeUs) {
if (mSource == NULL) {
// This happens when reset occurs right before the loop mode
// asynchronously seeks to the start of the stream.
LOG_ALWAYS_FATAL_IF(mAudioDecoder != NULL || mVideoDecoder != NULL,
"mSource is NULL and decoders not NULL audio(%p) video(%p)",
mAudioDecoder.get(), mVideoDecoder.get());
return;
}
mPreviousSeekTimeUs = seekTimeUs;
mSource->seekTo(seekTimeUs); // directly calls the corresponding Source interface
++mTimedTextGeneration;
// everything's flushed, continue playback.
}
7 resetAsync
The reset function's logic is relatively straightforward. The code is as follows:
void NuPlayer::resetAsync() {
sp<Source> source;
{
Mutex::Autolock autoLock(mSourceLock);
source = mSource;
}
if (source != NULL) {
// During a reset, the data source might be unresponsive already, we need to
// disconnect explicitly so that reads exit promptly.
// We can't queue the disconnect request to the looper, as it might be
// queued behind a stuck read and never gets processed.
// Doing a disconnect outside the looper to allows the pending reads to exit
// (either successfully or with error).
source->disconnect();
}
(new AMessage(kWhatReset, this))->post();
}
kWhatReset is handled in the message loop as follows:
case kWhatReset:
{
mResetting = true;
mDeferredActions.push_back(
new FlushDecoderAction(
FLUSH_CMD_SHUTDOWN /* audio */,
FLUSH_CMD_SHUTDOWN /* video */));
mDeferredActions.push_back(new SimpleAction(&NuPlayer::performReset));
processDeferredActions();
break;
}
The SimpleAction above directly invokes the given member function; performReset is implemented as follows:
void NuPlayer::performReset() {
cancelPollDuration();
++mScanSourcesGeneration;
mScanSourcesPending = false;
// tear down the Renderer
if (mRendererLooper != NULL) {
if (mRenderer != NULL) {
mRendererLooper->unregisterHandler(mRenderer->id());
}
mRendererLooper->stop();
mRendererLooper.clear();
}
mRenderer.clear();
++mRendererGeneration;
// tear down the Source
if (mSource != NULL) {
mSource->stop();
Mutex::Autolock autoLock(mSourceLock);
mSource.clear();
}
// notify that the reset has completed
if (mDriver != NULL) {
sp<NuPlayerDriver> driver = mDriver.promote();
if (driver != NULL) {
driver->notifyResetComplete();
}
}
mStarted = false;
mPrepared = false;
mResetting = false;
mSourceStarted = false;
}
8 getCurrentPosition/getFileMeta
getCurrentPosition returns the current playback position, obtained directly through the Renderer's corresponding interface. The implementation is:
status_t NuPlayer::getCurrentPosition(int64_t *mediaUs) {
sp<Renderer> renderer = mRenderer;
if (renderer == NULL) {
return NO_INIT;
}
return renderer->getCurrentPosition(mediaUs);
}
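NuPlayer reports positions in microseconds, while the MediaPlayer layer above works in milliseconds, so the caller has to convert. The helper below is only a hypothetical caller-side sketch, not the actual NuPlayerDriver code.
// Hypothetical caller-side helper (not NuPlayerDriver code): query the
// position in microseconds and round it to milliseconds for the upper layer.
static status_t getPositionMsSketch(const sp<NuPlayer> &player, int *msec) {
    int64_t positionUs = 0;
    status_t err = player->getCurrentPosition(&positionUs);
    if (err != OK) {
        return err;
    }
    *msec = static_cast<int>((positionUs + 500) / 1000);  // round to nearest ms
    return OK;
}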
getFileMeta returns the media's metadata, obtained directly through the Source's corresponding interface. The implementation is:
sp<MetaData> NuPlayer::getFileMeta() {
return mSource->getFileFormatMeta();
}
9 Summary and Open Questions
At this point we have walked through NuPlayer's main functions, but several questions remain, for example:
- How are media files in different formats probed and parsed? Where are the audio/video data buffers? (Source)
- How is video displayed? How is audio played? Where does audio/video synchronization happen? (Renderer)
- Where are the audio and video decoding threads? (DecoderBase)
I expect the next few articles to address exactly these questions.
To wrap up this article: based mainly on the AOSP 7.0 source code, we analyzed the implementations of NuPlayer's main external interfaces and briefly summarized what each part does.