When analyzing AudioTrack, the first step is to create a new AudioTrack and call its set method. At the end of set, createTrack_l is called to create the audio track. We now walk through the flow of createTrack_l.

Before analyzing createTrack_l, let's first look at the route Android audio takes from PCM to output. The PCM audio data normally lives on the client side, while mixing happens on the AudioFlinger side, so the PCM data has to be transferred to AudioFlinger, which requires allocating a block of memory for the transfer. Once the data reaches AudioFlinger, the PCM data can have its volume adjusted, effects applied, and so on (i.e. mixing), which requires another block of memory for effect processing; that buffer was already allocated inside getOutput. After mixing, the PCM data can be output to the audio device for playback.

createTrack_l's main task is to create the audio track, i.e. to allocate the memory used for data transfer. Concretely, it creates a shared buffer that AudioTrack can write to and AudioFlinger can read from for mixing.
createTrack can be broken into three overall steps:
- Obtain from AudioFlinger the parameters needed to create the shared buffer, such as latency, frameCount, and sampleRate, then compare them with the parameters passed in (frameCount, sampleRate) in order to compute the correct frameCount
- Create the buffer in AudioFlinger, and create AudioTrackServerProxy, the object that controls the shared buffer on the server side
- Create AudioTrackClientProxy, the object that controls the shared buffer on the client side
1. Obtaining the correct frameCount

AudioTrack obtains the frameCount as follows:
```cpp
status_t AudioTrack::createTrack_l(...)
{
    status = AudioSystem::getLatency(output, streamType, &afLatency);
    status = AudioSystem::getFrameCount(output, streamType, &afFrameCount);
    status = AudioSystem::getSamplingRate(output, streamType, &afSampleRate);

    if (!audio_is_linear_pcm(format)) {
        if (sharedBuffer != 0) {
            // Same comment as below about ignoring frameCount parameter for set()
            frameCount = sharedBuffer->size();
        } else if (frameCount == 0) {
            frameCount = afFrameCount;
        }
        if (mNotificationFramesAct != frameCount) {
            mNotificationFramesAct = frameCount;
        }
    } else if (sharedBuffer != 0) {
        // caller supplied a shared buffer, so we do not need to allocate one
        // Ensure that buffer alignment matches channel count
        // 8-bit data in shared memory is not currently supported by AudioFlinger
        size_t alignment = /* format == AUDIO_FORMAT_PCM_8_BIT ? 1 : */ 2;
        if (mChannelCount > 1) {
            alignment <<= 1;
        }
        if (((size_t)sharedBuffer->pointer() & (alignment - 1)) != 0) {
            return BAD_VALUE;
        }
        frameCount = sharedBuffer->size() / mChannelCount / sizeof(int16_t);
    } else if (!(flags & AUDIO_OUTPUT_FLAG_FAST)) {
        // non-fast track
        uint32_t minBufCount = 2;
        if (minBufCount <= nBuffering) {
            minBufCount = nBuffering;
        }
        // calculate buffer size from the parameters obtained from AudioFlinger
        size_t minFrameCount = (afFrameCount * sampleRate * minBufCount) / afSampleRate;
        if (frameCount == 0) {
            frameCount = minFrameCount;
        } else if (frameCount < minFrameCount) {
            frameCount = minFrameCount;
        }
    } else {
        // For fast tracks, the frame count calculations and checks are done by server
    }
}
```
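To make the shared-buffer branch above concrete, here is a small standalone worked example of the alignment rule and the frame-count derivation; the buffer size and channel count are hypothetical values chosen for illustration:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical: a caller-supplied shared buffer of 8192 bytes,
    // 16-bit PCM, stereo (mChannelCount == 2).
    size_t bufferBytes  = 8192;
    size_t channelCount = 2;

    // Same rule as createTrack_l: 2 bytes per 16-bit sample, doubled
    // for multi-channel, so stereo data needs 4-byte alignment.
    size_t alignment = 2;
    if (channelCount > 1) {
        alignment <<= 1;
    }

    // One frame = one sample per channel: 2 channels * 2 bytes = 4 bytes,
    // so 8192 bytes hold 2048 frames.
    size_t frameCount = bufferBytes / channelCount / sizeof(int16_t);
    printf("alignment=%zu bytes, frameCount=%zu\n", alignment, frameCount);
    return 0;
}
```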
First, look at the expression AudioTrack uses to compute the frameCount:

```cpp
minFrameCount = (afFrameCount * sampleRate * minBufCount) / afSampleRate;
```
afFrameCount and afSampleRate are both parameters obtained from AudioFlinger.
- afFrameCount is the size of the MixerBuffer, measured in frames, where one frame of PCM audio is one sample times the number of channels.
- afSampleRate is the MixerBuffer's default sample rate, i.e. the number of frames contained in one second.

This gives the following relation:
$BufferSeconds = \frac{afFrameCount}{afSampleRate} = \frac{frameCount}{sampleRate}$
which gives the number of seconds of audio data the buffer contains.
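As a sanity check on the formula, here is a standalone calculation with hypothetical but typical values (afFrameCount = 512, afSampleRate = sampleRate = 44100, double buffering); the numbers are illustrative, not from the source:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical values for illustration.
    uint32_t afFrameCount = 512;    // mixer buffer size in frames
    uint32_t afSampleRate = 44100;  // mixer sample rate
    uint32_t sampleRate   = 44100;  // client sample rate, no conversion
    uint32_t minBufCount  = 2;      // double buffering

    // Same formula as createTrack_l.
    uint32_t minFrameCount = (afFrameCount * sampleRate * minBufCount) / afSampleRate;

    // Each mixer buffer holds afFrameCount/afSampleRate seconds of audio
    // (512 / 44100 ≈ 11.6 ms), so the client buffer covers two of those.
    printf("minFrameCount=%u (~%.1f ms)\n",
           minFrameCount, 1000.0 * minFrameCount / sampleRate);
    return 0;
}
```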
Below is an example of such a buffer. The sample rate would normally be 44100, but to keep the diagram simple it is drawn as 5 instead.

AudioFlinger obtains afFrameCount as follows:
```cpp
// AudioFlinger.cpp
size_t AudioFlinger::frameCount(audio_io_handle_t output) const
{
    return thread->frameCount();
}

// Thread.h
virtual size_t frameCount() const { return mNormalFrameCount; }

// Thread.cpp
void AudioFlinger::PlaybackThread::readOutputParameters()
{
    mFrameCount = mOutput->stream->common.get_buffer_size(&mOutput->stream->common) /
            mFrameSize;
    mNormalFrameCount = multiplier * mFrameCount;
}

// Audio_hw.c
#define SHORT_PERIOD_SIZE 512

static size_t out_get_buffer_size_low_latency(const struct audio_stream *stream)
{
    struct tuna_stream_out *out = (struct tuna_stream_out *)stream;

    /* take resampling into account and return the closest majoring
    multiple of 16 frames, as audioflinger expects audio buffers to
    be a multiple of 16 frames. Note: we use the default rate here
    from pcm_config_tones.rate. */
    size_t size = (SHORT_PERIOD_SIZE * DEFAULT_OUT_SAMPLING_RATE) / pcm_config_tones.rate;
    size = ((size + 15) / 16) * 16;
    return size * audio_stream_frame_size((struct audio_stream *)stream);
}
```
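To see what the HAL code above actually returns, here is a standalone rerun of its arithmetic. SHORT_PERIOD_SIZE and DEFAULT_OUT_SAMPLING_RATE come from the snippet, while pcm_config_tones.rate = 48000 is an assumption (a common hardware rate), not a value given in the source:

```cpp
#include <cstddef>
#include <cstdio>

int main() {
    size_t shortPeriodSize = 512;    // SHORT_PERIOD_SIZE from the HAL
    size_t defaultOutRate  = 44100;  // DEFAULT_OUT_SAMPLING_RATE from the HAL
    size_t pcmTonesRate    = 48000;  // assumed value of pcm_config_tones.rate
    size_t frameSize       = 4;     // 16-bit stereo: 2 bytes * 2 channels

    // Scale the period size by the resampling ratio: 512*44100/48000 = 470.
    size_t size = (shortPeriodSize * defaultOutRate) / pcmTonesRate;

    // Round up to a multiple of 16 frames, as AudioFlinger expects: 480.
    size = ((size + 15) / 16) * 16;

    printf("buffer = %zu frames = %zu bytes\n", size, size * frameSize);
    return 0;
}
```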
afSampleRate is obtained as follows:
```cpp
// AudioFlinger.cpp
uint32_t AudioFlinger::sampleRate(audio_io_handle_t output) const
{
    return thread->sampleRate();
}

// Thread.h
uint32_t sampleRate() const { return mSampleRate; }

// Thread.cpp, where the sample rate is initialized
void AudioFlinger::PlaybackThread::readOutputParameters()
{
    mSampleRate = mOutput->stream->common.get_sample_rate(&mOutput->stream->common);
}

// Audio_hw.c
#define DEFAULT_OUT_SAMPLING_RATE 44100 // 48000 is possible but interacts poorly with HDMI

static uint32_t out_get_sample_rate(const struct audio_stream *stream)
{
    return DEFAULT_OUT_SAMPLING_RATE;
}
```
minFrameCount also factors in minBufCount, i.e. how many MixerBuffer-sized chunks the shared buffer contains:
```cpp
// The client's AudioTrack buffer is divided into n parts for purpose of wakeup by server, where
//  n = 1   fast track; nBuffering is ignored
//  n = 2   normal track, no sample rate conversion
//  n = 3   normal track, with sample rate conversion
//          (pessimistic; some non-1:1 conversion ratios don't actually need triple-buffering)
//  n > 3   very high latency or very small notification interval; nBuffering is ignored
```
- If the fast-track flag is specified when calling set, it means the data in the audio buffer should be processed as quickly as possible, so the buffer is created relatively small and single buffering is used
- In the normal case, i.e. when the sample rate of the input PCM data matches the output sample rate, no sample rate conversion is needed and double buffering is used
- When sample rate conversion is needed, triple buffering is used (a sketch of this choice follows the list)
- In high-latency situations (e.g. the hardware cannot output the PCM audio in time), a larger buffer is needed to cache the data
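Here is a minimal sketch of how the buffer count could be selected according to the table above; the real decision lives in AudioTrack::createTrack_l, and this condition is a simplified reconstruction rather than the verbatim AOSP code:

```cpp
#include <cstdint>

// Simplified reconstruction: pick the number of mixer-buffer-sized parts
// the client buffer is divided into, per the comment table above.
uint32_t chooseNBuffering(bool fastTrack, uint32_t sampleRate, uint32_t afSampleRate) {
    if (fastTrack) {
        return 1;  // fast track: single buffer, nBuffering is ignored
    }
    if (sampleRate == afSampleRate) {
        return 2;  // no sample rate conversion: double buffering
    }
    return 3;      // resampling: triple buffering (pessimistic)
}
```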
2. AudioFlinger creates the shared buffer

AudioTrack creates the shared buffer by calling AudioFlinger's createTrack method. createTrack proceeds in these steps:
- Obtain the output thread, a PlaybackThread
- Call createTrack_l on the obtained PlaybackThread to create a Track object; the shared buffer is created inside the Track object
- Create TrackHandle, the binder object for the Track. The Track must be returned to AudioTrack across binder, and only binder objects can cross, so this object wraps the Track and carries the shared buffer's information
```cpp
sp<IAudioTrack> AudioFlinger::createTrack(...)
{
    PlaybackThread *thread = checkPlaybackThread_l(output);
    track = thread->createTrack_l(client, streamType, sampleRate, format,
            channelMask, frameCount, sharedBuffer, lSessionId, flags, tid,
            clientUid, &lStatus);
    trackHandle = new TrackHandle(track);
    return trackHandle;
}
```
①. Obtaining the output thread PlaybackThread

Remember the PlaybackThread created during getOutput? The PlaybackThread is created together with the MixerThread. Inside getOutput we put that thread into mPlaybackThreads for bookkeeping; now we need to take it back out.
```cpp
AudioFlinger::PlaybackThread *AudioFlinger::checkPlaybackThread_l(audio_io_handle_t output) const
{
    return mPlaybackThreads.valueFor(output).get();
}
```
②. Calling the PlaybackThread's createTrack_l

Inside createTrack_l, new Track is invoked, and that is where the shared buffer gets created:
```cpp
sp<AudioFlinger::PlaybackThread::Track> AudioFlinger::PlaybackThread::createTrack_l(...)
{
    track = new Track(this, client, streamType, sampleRate, format,
            channelMask, frameCount, sharedBuffer, sessionId, uid, *flags);
}
```
Track's parent class is TrackBase, so a TrackBase object is constructed first:
```cpp
// TrackBase constructor must be called with AudioFlinger::mLock held
AudioFlinger::ThreadBase::TrackBase::TrackBase(...)
{
    // buffer header size
    size_t size = sizeof(audio_track_cblk_t);
    // buffer content size
    size_t bufferSize = (sharedBuffer == 0 ? roundup(frameCount) : frameCount) * mFrameSize;
    if (sharedBuffer == 0) {
        size += bufferSize;
    }

    if (client != 0) {
        // allocate the shared buffer
        mCblkMemory = client->heap()->allocate(size);
        if (mCblkMemory != 0) {
            mCblk = static_cast<audio_track_cblk_t *>(mCblkMemory->pointer());
            // can't assume mCblk != NULL
        } else {
            ALOGE("not enough memory for AudioTrack size=%u", size);
            client->heap()->dump("AudioTrack");
            return;
        }
    } else {
        // this syntax avoids calling the audio_track_cblk_t constructor twice
        mCblk = (audio_track_cblk_t *) new uint8_t[size];
        // assume mCblk != NULL
    }

    // construct the shared structure in-place.
    if (mCblk != NULL) {
        // this is the header above the buffer content
        new(mCblk) audio_track_cblk_t();
        // clear all buffers
        mCblk->frameCount_ = frameCount;
        if (sharedBuffer == 0) {
            mBuffer = (char*)mCblk + sizeof(audio_track_cblk_t);
            memset(mBuffer, 0, bufferSize);
        } else {
            mBuffer = sharedBuffer->pointer();
        }
    }
}
```
The allocated memory must hold the shared buffer that stores the audio PCM data, plus the audio_track_cblk_t buffer header in front of it. heap->allocate is called to create the shared memory, while the header is constructed with placement new: new(mCblk) audio_track_cblk_t(). The resulting layout is the header followed immediately by the buffer content.
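Here is a minimal standalone sketch of the same layout trick: placement-new a control-block header at the front of a single allocation, with the data area starting right after it. ControlBlock and its fields are illustrative stand-ins, not the real audio_track_cblk_t:

```cpp
#include <cstddef>
#include <cstring>
#include <new>

// Illustrative stand-in for audio_track_cblk_t: a header shared by both
// sides, followed immediately by the PCM data area.
struct ControlBlock {
    size_t frameCount;
    // ... read/write positions, flags, futex word, etc.
};

int main() {
    size_t frameCount = 1024;
    size_t frameSize  = 4;  // 16-bit stereo
    size_t size = sizeof(ControlBlock) + frameCount * frameSize;

    // One allocation holds header + data (in AOSP this comes from the
    // client's shared-memory heap so both processes can map it).
    void* mem = ::operator new(size);

    // Construct the header in place at the start of the block...
    ControlBlock* cblk = new (mem) ControlBlock();
    cblk->frameCount = frameCount;

    // ...and the data area starts right after the header.
    char* buffer = reinterpret_cast<char*>(cblk) + sizeof(ControlBlock);
    memset(buffer, 0, frameCount * frameSize);

    cblk->~ControlBlock();
    ::operator delete(mem);
    return 0;
}
```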
In the body of the Track constructor, an AudioTrackServerProxy is created. This object performs the buffer operations on the AudioFlinger side; since the shared buffer is accessed across threads, and even across processes, the proxy is what guarantees thread-safe access to the buffer.
```cpp
AudioFlinger::PlaybackThread::Track::Track(...)
{
    mAudioTrackServerProxy = new AudioTrackServerProxy(mCblk, mBuffer, frameCount,
            mFrameSize);
    mServerProxy = mAudioTrackServerProxy;
}
```
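Conceptually, the server/client proxy pair implements a single-producer, single-consumer ring buffer on top of the shared control block. The sketch below shows that pattern with illustrative names; the real AOSP Proxy classes additionally handle futex-based wakeups, underrun accounting, and more:

```cpp
#include <atomic>
#include <cstdint>

// Conceptual single-producer/single-consumer ring buffer: the client
// (producer) advances `rear` as it writes frames, the server (consumer)
// advances `front` as it mixes them. Both indices live in shared memory
// and grow monotonically; differences are taken modulo the buffer size.
struct RingControl {
    std::atomic<uint32_t> front{0};  // server's read position
    std::atomic<uint32_t> rear{0};   // client's write position
};

// Frames the client may still write without overwriting unread data.
uint32_t framesAvailableToWrite(const RingControl& c, uint32_t frameCount) {
    return frameCount - (c.rear.load(std::memory_order_acquire) -
                         c.front.load(std::memory_order_acquire));
}

// Frames the server may read (i.e. frames already written by the client).
uint32_t framesAvailableToRead(const RingControl& c) {
    return c.rear.load(std::memory_order_acquire) -
           c.front.load(std::memory_order_acquire);
}
```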
③. Creating the TrackHandle

The shared buffer is not only read on the AudioFlinger side but also written on the AudioTrack side, so the newly created Track must be sent back to AudioTrack. Only binder objects can be passed across binder, so a binder object, TrackHandle, is constructed and returned to AudioTrack.
```cpp
sp<IAudioTrack> AudioFlinger::createTrack(...)
{
    trackHandle = new TrackHandle(track);
}

// TrackHandle is a BnBinder object
class TrackHandle : public android::BnAudioTrack {
    ...
};
```
At this point, the AudioFlinger side of createTrack_l's work is essentially done.

3. Creating the ClientProxy

Where there is a ServerProxy there is, correspondingly, a ClientProxy: AudioTrackClientProxy is the class through which the AudioTrack side operates on the Track (the shared buffer).

Once AudioFlinger's createTrack has returned the TrackHandle, the Track's details, such as the buffer's start address, can be obtained through TrackHandle's functions. These details are used to construct the AudioTrackClientProxy:
```cpp
status_t AudioTrack::createTrack_l(...)
{
    sp<IAudioTrack> track = audioFlinger->createTrack(...);
    sp<IMemory> iMem = track->getCblk();
    audio_track_cblk_t* cblk = static_cast<audio_track_cblk_t*>(iMem->pointer());
    mProxy = new AudioTrackClientProxy(cblk, buffers, frameCount, mFrameSizeAF);
}
```
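Once the client proxy exists, AudioTrack's write path drains user data into the shared buffer through it. The sketch below, which only builds inside the AOSP tree, shows the general shape of that loop using the proxy's obtainBuffer/releaseBuffer; it is a simplified reconstruction with error handling trimmed, not the actual AudioTrack::write:

```cpp
// Simplified reconstruction of the client-side write path (AOSP tree only);
// partial-buffer bookkeeping and restart handling are omitted.
#include <string.h>
#include <media/AudioTrackShared.h>

using namespace android;

ssize_t writePcm(const sp<AudioTrackClientProxy>& proxy,
                 const void* data, size_t sizeBytes, size_t frameSize)
{
    size_t written = 0;
    while (written < sizeBytes) {
        Proxy::Buffer buffer;
        buffer.mFrameCount = (sizeBytes - written) / frameSize;

        // Blocks until the server has freed at least one frame.
        status_t err = proxy->obtainBuffer(&buffer);
        if (err != NO_ERROR) {
            return written > 0 ? (ssize_t) written : (ssize_t) err;
        }

        // Copy user PCM data into the granted region of shared memory.
        size_t bytes = buffer.mFrameCount * frameSize;
        memcpy(buffer.mRaw, (const char*) data + written, bytes);
        written += bytes;

        // Publish the frames so the AudioFlinger side can mix them.
        proxy->releaseBuffer(&buffer);
    }
    return written;
}
```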
4. Summary

Finally, a summary of the relationships between the objects.

AudioFlinger:
- The Track is created first, on the AudioFlinger side; the Track is responsible for creating the buffer and maintaining the buffer pointers
- Inside the Track there is an AudioTrackServerProxy member object, used for the buffer operations
- TrackHandle is the Track object's binder instance, used to return the Track to AudioTrack over binder

AudioTrack:
- IAudioTrack is the class corresponding to TrackHandle on the AudioTrack side; it supplies the buffer's details to AudioTrackClientProxy
- Once AudioTrackClientProxy has the buffer's details, it can operate on the buffer

The overall flow of createTrack_l is as follows: