In the previous article, "(2) Audio Subsystem: new AudioRecord()", we looked at how the Audio system creates the AudioRecord object and the input stream, and how the RecordThread gets created. Next, we continue by analyzing the implementation of startRecording() in AudioRecord.
Function prototype:
public void startRecording() throws IllegalStateException
Purpose:
Starts recording.
Parameters:
None
Return value:
None
Exceptions:
Throws IllegalStateException if the AudioRecord has not been successfully initialized.
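Before diving into the framework, here is a minimal client-side sketch of the flow this article traces, written against the native AudioRecord API in libmedia (the parameters are arbitrary examples and error handling is trimmed; this is an illustration, not code from the article's sources):
#include <media/AudioRecord.h>

using namespace android;

int main()
{
    // Assumed parameters: built-in mic source, 44.1 kHz, 16-bit mono.
    sp<AudioRecord> record = new AudioRecord(
            AUDIO_SOURCE_MIC, 44100, AUDIO_FORMAT_PCM_16_BIT,
            AUDIO_CHANNEL_IN_MONO);
    if (record->initCheck() != NO_ERROR) { // native counterpart of STATE_INITIALIZED
        return -1;
    }
    record->start();                       // the call analyzed in this article
    char buffer[4096];
    ssize_t got = record->read(buffer, sizeof(buffer)); // blocking read (TRANSFER_SYNC)
    record->stop();
    return 0;
}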
Now let's dig into the concrete implementation in the framework.
frameworks/base/media/java/android/media/AudioRecord.java
public void startRecording()
throws IllegalStateException {
if (mState != STATE_INITIALIZED) {
throw new IllegalStateException("startRecording() called on an "
+ "uninitialized AudioRecord.");
}
// start recording
synchronized(mRecordingStateLock) {
if (native_start(MediaSyncEvent.SYNC_EVENT_NONE, 0) == SUCCESS) {
handleFullVolumeRec(true);
mRecordingState = RECORDSTATE_RECORDING;
}
}
}
It first checks that initialization has completed; from the previous article we know mState is already STATE_INITIALIZED here, so we go straight on to the native_start function.
frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint
android_media_AudioRecord_start(JNIEnv *env, jobject thiz, jint event, jint triggerSession)
{
sp<AudioRecord> lpRecorder = getAudioRecord(env, thiz);
if (lpRecorder == NULL ) {
jniThrowException(env, "java/lang/IllegalStateException", NULL);
return (jint) AUDIO_JAVA_ERROR;
}
return nativeToJavaStatus(
lpRecorder->start((AudioSystem::sync_event_t)event, triggerSession));
}
Moving on down into lpRecorder->start():
frameworks\av\media\libmedia\AudioRecord.cpp
status_t AudioRecord::start(AudioSystem::sync_event_t event, int triggerSession)
{
AutoMutex lock(mLock);
if (mActive) {
return NO_ERROR;
}
// reset current position as seen by client to 0
mProxy->setEpoch(mProxy->getEpoch() - mProxy->getPosition());
// force refresh of remaining frames by processAudioBuffer() as last
// read before stop could be partial.
mRefreshRemaining = true;
mNewPosition = mProxy->getPosition() + mUpdatePeriod;
int32_t flags = android_atomic_acquire_load(&mCblk->mFlags);
status_t status = NO_ERROR;
if (!(flags & CBLK_INVALID)) {
ALOGV("mAudioRecord->start()");
status = mAudioRecord->start(event, triggerSession);
if (status == DEAD_OBJECT) {
flags |= CBLK_INVALID;
}
}
if (flags & CBLK_INVALID) {
status = restoreRecord_l("start");
}
if (status != NO_ERROR) {
ALOGE("start() status %d", status);
} else {
mActive = true;
sp<AudioRecordThread> t = mAudioRecordThread;
if (t != 0) {
t->resume();
} else {
mPreviousPriority = getpriority(PRIO_PROCESS, 0);
get_sched_policy(0, &mPreviousSchedulingGroup);
androidSetThreadPriority(0, ANDROID_PRIORITY_AUDIO);
}
}
return status;
}
This function does the following:
1. Resets the start position of the recording buffer, as seen by the client, back to 0; the layout of this buffer was covered in the first article;
2. Sets mRefreshRemaining to true; as the comment says, it forces processAudioBuffer() to refresh the remaining frame count, since the last read before a stop may have been partial. This variable shows up again later, so no hurry;
3. Loads flags from mCblk->mFlags, which is 0x0 here;
4. Since this is the first start and the control block is not invalid, it goes through mAudioRecord->start();
5. If start() returns DEAD_OBJECT, CBLK_INVALID is set and restoreRecord_l() is called to rebuild the input stream channel; that function was analyzed in the previous article;
6. Calls resume() on the AudioRecordThread.
We will focus on steps 4 and 6.
First, step 4 of AudioRecord.cpp::start(): mAudioRecord->start().
mAudioRecord is an sp<IAudioRecord>, i.e. the Bp (proxy) side of a Binder interface, so we need to find the corresponding BnAudioRecord; the Bn side is declared in AudioFlinger.h
frameworks\av\services\audioflinger\AudioFlinger.h
// server side of the client's IAudioRecord
class RecordHandle : public android::BnAudioRecord {
public:
RecordHandle(const sp<RecordThread::RecordTrack>& recordTrack);
virtual ~RecordHandle();
virtual status_t start(int /*AudioSystem::sync_event_t*/ event, int triggerSession);
virtual void stop();
virtual status_t onTransact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags);
private:
const sp<RecordThread::RecordTrack> mRecordTrack;
// for use from destructor
void stop_nonvirtual();
};
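As an aside, here is roughly how the call crosses Binder before it reaches RecordHandle; this is a simplified sketch in the spirit of frameworks/av/media/libmedia/IAudioRecord.cpp, not a verbatim quote:
// Bp (client) side: marshal the arguments and send the START transaction
virtual status_t start(int event, int triggerSession)
{
    Parcel data, reply;
    data.writeInterfaceToken(IAudioRecord::getInterfaceDescriptor());
    data.writeInt32(event);
    data.writeInt32(triggerSession);
    status_t status = remote()->transact(START, data, &reply);
    if (status == NO_ERROR) {
        status = reply.readInt32(); // status returned by the Bn side
    }
    return status;
}
// On the Bn side, BnAudioRecord::onTransact() unpacks the START transaction
// and invokes the virtual start(), which RecordHandle overrides below.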
So next we need to find where the RecordHandle class is implemented; note that besides start() it also exposes a stop() method.
frameworks\av\services\audioflinger\Tracks.cpp
status_t AudioFlinger::RecordHandle::start(int /*AudioSystem::sync_event_t*/ event,
int triggerSession) {
return mRecordTrack->start((AudioSystem::sync_event_t)event, triggerSession);
}
AudioFlinger.h shows const sp<RecordThread::RecordTrack> mRecordTrack, which is likewise implemented in Tracks.cpp, so we keep going.
status_t AudioFlinger::RecordThread::RecordTrack::start(AudioSystem::sync_event_t event,
int triggerSession)
{
sp<ThreadBase> thread = mThread.promote();
if (thread != 0) {
RecordThread *recordThread = (RecordThread *)thread.get();
return recordThread->start(this, event, triggerSession);
} else {
return BAD_VALUE;
}
}
The thread here is the one on which createRecordTrack_l() was called in AudioRecord.cpp::openRecord_l(). Digging a little deeper: thread->createRecordTrack_l() calls new RecordTrack(this, ...), RecordTrack derives from TrackBase, and the TrackBase constructor, TrackBase(ThreadBase *thread, ...) : RefBase(), mThread(thread), ... {} (also implemented in Tracks.cpp), stores the thread. So mThread here is the RecordThread.
That means we continue into RecordThread's start() method:
frameworks\av\services\audioflinger\Threads.cpp
status_t AudioFlinger::RecordThread::start(RecordThread::RecordTrack* recordTrack,
AudioSystem::sync_event_t event,
int triggerSession)
{
sp<ThreadBase> strongMe = this;
status_t status = NO_ERROR;
if (event == AudioSystem::SYNC_EVENT_NONE) {
recordTrack->clearSyncStartEvent();
} else if (event != AudioSystem::SYNC_EVENT_SAME) {
recordTrack->mSyncStartEvent = mAudioFlinger->createSyncEvent(event,
triggerSession,
recordTrack->sessionId(),
syncStartEventCallback,
recordTrack);
// Sync event can be cancelled by the trigger session if the track is not in a
// compatible state in which case we start record immediately
if (recordTrack->mSyncStartEvent->isCancelled()) {
recordTrack->clearSyncStartEvent();
} else {
// do not wait for the event for more than AudioSystem::kSyncRecordStartTimeOutMs
recordTrack->mFramesToDrop = -
((AudioSystem::kSyncRecordStartTimeOutMs * recordTrack->mSampleRate) / 1000);
}
}
{
// This section is a rendezvous between binder thread executing start() and RecordThread
AutoMutex lock(mLock);
if (mActiveTracks.indexOf(recordTrack) >= 0) {
if (recordTrack->mState == TrackBase::PAUSING) {
ALOGV("active record track PAUSING -> ACTIVE");
recordTrack->mState = TrackBase::ACTIVE;
} else {
ALOGV("active record track state %d", recordTrack->mState);
}
return status;
}
// TODO consider other ways of handling this, such as changing the state to :STARTING and
// adding the track to mActiveTracks after returning from AudioSystem::startInput(),
// or using a separate command thread
recordTrack->mState = TrackBase::STARTING_1;
mActiveTracks.add(recordTrack);
mActiveTracksGen++;
status_t status = NO_ERROR;
if (recordTrack->isExternalTrack()) {
mLock.unlock();
status = AudioSystem::startInput(mId, (audio_session_t)recordTrack->sessionId());
mLock.lock();
// FIXME should verify that recordTrack is still in mActiveTracks
if (status != NO_ERROR) {//0
mActiveTracks.remove(recordTrack);
mActiveTracksGen++;
recordTrack->clearSyncStartEvent();
ALOGV("RecordThread::start error %d", status);
return status;
}
}
// Catch up with current buffer indices if thread is already running.
// This is what makes a new client discard all buffered data. If the track's mRsmpInFront
// was initialized to some value closer to the thread's mRsmpInFront, then the track could
// see previously buffered data before it called start(), but with greater risk of overrun.
recordTrack->mRsmpInFront = mRsmpInRear;
recordTrack->mRsmpInUnrel = 0;
// FIXME why reset?
if (recordTrack->mResampler != NULL) {
recordTrack->mResampler->reset();
}
recordTrack->mState = TrackBase::STARTING_2;
// signal thread to start
mWaitWorkCV.broadcast();
if (mActiveTracks.indexOf(recordTrack) < 0) {
ALOGV("Record failed to start");
status = BAD_VALUE;
goto startError;
}
return status;
}
startError:
if (recordTrack->isExternalTrack()) {
AudioSystem::stopInput(mId, (audio_session_t)recordTrack->sessionId());
}
recordTrack->clearSyncStartEvent();
// FIXME I wonder why we do not reset the state here?
return status;
}
This function does the following:
1. Check the incoming event: from AudioRecord.java we know it is always SYNC_EVENT_NONE, so the sync start event is simply cleared;
2. Check whether the incoming recordTrack is already in the mActiveTracks collection. On this first start it is not; if it were (i.e. recording had already been started for some reason), the code would check for the PAUSING state, promote it to ACTIVE, and return right away;
3. Set recordTrack's state to STARTING_1 and add it to mActiveTracks; an indexOf() at this point would now find it;
4. Check whether recordTrack is an external track; isExternalTrack() is defined as follows:
bool isTimedTrack() const { return (mType == TYPE_TIMED); }
bool isOutputTrack() const { return (mType == TYPE_OUTPUT); }
bool isPatchTrack() const { return (mType == TYPE_PATCH); }
bool isExternalTrack() const { return !isOutputTrack() && !isPatchTrack(); }
Recall that when new RecordTrack was created we passed mType as TrackBase::TYPE_DEFAULT, so this recordTrack is an external track;
5. Since it is an external track, AudioSystem::startInput() is called to start capturing. The sessionId is the one from the previous article. As for mId, its type in AudioSystem::startInput() is audio_io_handle_t: in the previous article this I/O handle was obtained from AudioSystem::getInputForAttr(), and checkRecordThread_l(input) then returned a RecordThread object. Looking at the class declaration, class RecordThread : public ThreadBase, and at the ThreadBase constructor implemented in Threads.cpp, we find that input is assigned to mId. In other words, AudioSystem::startInput() is called with the input stream established earlier plus the generated sessionId.
AudioFlinger::ThreadBase::ThreadBase(const sp<AudioFlinger>& audioFlinger, audio_io_handle_t id,
audio_devices_t outDevice, audio_devices_t inDevice, type_t type)
: Thread(false /*canCallJava*/),
mType(type),
mAudioFlinger(audioFlinger),
// mSampleRate, mFrameCount, mChannelMask, mChannelCount, mFrameSize, mFormat, mBufferSize
// are set by PlaybackThread::readOutputParameters_l() or
// RecordThread::readInputParameters_l()
//FIXME: mStandby should be true here. Is this some kind of hack?
mStandby(false), mOutDevice(outDevice), mInDevice(inDevice),
mAudioSource(AUDIO_SOURCE_DEFAULT), mId(id),
// mName will be set by concrete (non-virtual) subclass
mDeathRecipient(new PMDeathRecipient(this))
{
}
6. Sync the track's buffer indices with the thread: recordTrack->mRsmpInFront is set to the thread's current mRsmpInRear (this is what makes a new client discard previously buffered data), mRsmpInUnrel is cleared, and the resampler, if there is one, is reset; recording has obviously not produced data yet at this point;
7. Set recordTrack's state to STARTING_2, then call mWaitWorkCV.broadcast() to wake the worker thread. Note, and this is a small spoiler: inside AudioSystem::startInput() the AudioFlinger::RecordThread is already up and looping, so the broadcast actually has no effect on it. Also pay special attention to the state change to STARTING_2 here, whereas the track was added to mActiveTracks in state STARTING_1; this two-step dance is interesting, so keep it in mind, the answer comes when we analyze RecordThread;
8. Finally, verify that recordTrack is still in mActiveTracks; if not, the start failed and we jump to startError, which calls stopInput() and so on.
Next we analyze the AudioSystem::startInput() method
frameworks\av\media\libmedia\AudioSystem.cpp
status_t AudioSystem::startInput(audio_io_handle_t input,
audio_session_t session)
{
const sp<IAudioPolicyService>& aps = AudioSystem::get_audio_policy_service();
if (aps == 0) return PERMISSION_DENIED;
return aps->startInput(input, session);
}
This just forwards to AudioPolicyService's startInput() method
frameworks\av\services\audiopolicy\AudioPolicyInterfaceImpl.cpp
status_t AudioPolicyService::startInput(audio_io_handle_t input,
audio_session_t session)
{
if (mAudioPolicyManager == NULL) {
return NO_INIT;
}
Mutex::Autolock _l(mLock);
return mAudioPolicyManager->startInput(input, session);
}
Which forwards once more
frameworks\av\services\audiopolicy\AudioPolicyManager.cpp
status_t AudioPolicyManager::startInput(audio_io_handle_t input,
audio_session_t session)
{
ssize_t index = mInputs.indexOfKey(input);
if (index < 0) {
ALOGW("startInput() unknown input %d", input);
return BAD_VALUE;
}
sp<AudioInputDescriptor> inputDesc = mInputs.valueAt(index);
index = inputDesc->mSessions.indexOf(session);
if (index < 0) {
ALOGW("startInput() unknown session %d on input %d", session, input);
return BAD_VALUE;
}
// virtual input devices are compatible with other input devices
if (!isVirtualInputDevice(inputDesc->mDevice)) {
// for a non-virtual input device, check if there is another (non-virtual) active input
audio_io_handle_t activeInput = getActiveInput();
if (activeInput != 0 && activeInput != input) {
// If the already active input uses AUDIO_SOURCE_HOTWORD then it is closed,
// otherwise the active input continues and the new input cannot be started.
sp<AudioInputDescriptor> activeDesc = mInputs.valueFor(activeInput);
if (activeDesc->mInputSource == AUDIO_SOURCE_HOTWORD) {
ALOGW("startInput(%d) preempting low-priority input %d", input, activeInput);
stopInput(activeInput, activeDesc->mSessions.itemAt(0));
releaseInput(activeInput, activeDesc->mSessions.itemAt(0));
} else {
ALOGE("startInput(%d) failed: other input %d already started", input, activeInput);
return INVALID_OPERATION;
}
}
}
if (inputDesc->mRefCount == 0) {
if (activeInputsCount() == 0) {
SoundTrigger::setCaptureState(true);
}
setInputDevice(input, getNewInputDevice(input), true /* force */);
// automatically enable the remote submix output when input is started if not
// used by a policy mix of type MIX_TYPE_RECORDERS
// For remote submix (a virtual device), we open only one input per capture request.
if (audio_is_remote_submix_device(inputDesc->mDevice)) {
ALOGV("audio_is_remote_submix_device(inputDesc->mDevice)");
String8 address = String8("");
if (inputDesc->mPolicyMix == NULL) {
address = String8("0");
} else if (inputDesc->mPolicyMix->mMixType == MIX_TYPE_PLAYERS) {
address = inputDesc->mPolicyMix->mRegistrationId;
}
if (address != "") {
setDeviceConnectionStateInt(AUDIO_DEVICE_OUT_REMOTE_SUBMIX,
AUDIO_POLICY_DEVICE_STATE_AVAILABLE,
address);
}
}
}
ALOGV("AudioPolicyManager::startInput() input source = %d", inputDesc->mInputSource);
inputDesc->mRefCount++;
return NO_ERROR;
}
This function does the following:
1. Find the input in the mInputs collection and fetch its inputDesc;
2. If the input device is not a virtual device, check whether another (non-virtual) input is already active; on this first start there is none;
3. Since this is the first active input, SoundTrigger::setCaptureState(true) is called; that is related to voice recognition, so we will not go into it here;
4. Call setInputDevice(). getNewInputDevice(input) returns the audio_devices_t for this input; as in the previous article, it was selected in AudioPolicyManager::getInputForAttr() via getDeviceAndMixForInputSource(), i.e. AUDIO_DEVICE_IN_BUILTIN_MIC, the built-in mic. setInputDevice() also updates inputDesc->mDevice at the end;
5. Check whether this is a remote-submix device and handle that case accordingly;
6. Increment inputDesc->mRefCount.
Let's continue into the setInputDevice() function
status_t AudioPolicyManager::setInputDevice(audio_io_handle_t input,
audio_devices_t device,
bool force,
audio_patch_handle_t *patchHandle)
{
status_t status = NO_ERROR;
sp<AudioInputDescriptor> inputDesc = mInputs.valueFor(input);
if ((device != AUDIO_DEVICE_NONE) && ((device != inputDesc->mDevice) || force)) {
inputDesc->mDevice = device;
DeviceVector deviceList = mAvailableInputDevices.getDevicesFromType(device);
if (!deviceList.isEmpty()) {
struct audio_patch patch;
inputDesc->toAudioPortConfig(&patch.sinks[0]);
// AUDIO_SOURCE_HOTWORD is for internal use only:
// handled as AUDIO_SOURCE_VOICE_RECOGNITION by the audio HAL
if (patch.sinks[0].ext.mix.usecase.source == AUDIO_SOURCE_HOTWORD &&
!inputDesc->mIsSoundTrigger) {
patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_VOICE_RECOGNITION;
}
patch.num_sinks = 1;
//only one input device for now
deviceList.itemAt(0)->toAudioPortConfig(&patch.sources[0]);
patch.num_sources = 1;
ssize_t index;
if (patchHandle && *patchHandle != AUDIO_PATCH_HANDLE_NONE) {
index = mAudioPatches.indexOfKey(*patchHandle);
} else {
index = mAudioPatches.indexOfKey(inputDesc->mPatchHandle);
}
sp< AudioPatch> patchDesc;
audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
if (index >= 0) {
patchDesc = mAudioPatches.valueAt(index);
afPatchHandle = patchDesc->mAfPatchHandle;
}
status_t status = mpClientInterface->createAudioPatch(&patch,
&afPatchHandle,
0);
if (status == NO_ERROR) {
if (index < 0) {
patchDesc = new AudioPatch((audio_patch_handle_t)nextUniqueId(),
&patch, mUidCached);
addAudioPatch(patchDesc->mHandle, patchDesc);
} else {
patchDesc->mPatch = patch;
}
patchDesc->mAfPatchHandle = afPatchHandle;
patchDesc->mUid = mUidCached;
if (patchHandle) {
*patchHandle = patchDesc->mHandle;
}
inputDesc->mPatchHandle = patchDesc->mHandle;
nextAudioPortGeneration();
mpClientInterface->onAudioPatchListUpdate();
}
}
}
return status;
}
This function does the following:
1. By now device and inputDesc->mDevice are both already AUDIO_DEVICE_IN_BUILTIN_MIC, but force is true;
2. Fetch all devices of this type from mAvailableInputDevices; so far we have only added a single device to that collection;
3. Here we should look at struct audio_patch, defined in system\core\include\system\audio.h. This code fills in the source and sinks of the audio_patch; note that mId (an audio_io_handle_t) is written into it, and that the patch ends up carrying the InputSource, sample_rate, channel_mask, format, hw_module and so on, nearly everything (see the sketch after this list):
struct audio_patch {
audio_patch_handle_t id; /* patch unique ID */
unsigned int num_sources; /* number of sources in following array */
struct audio_port_config sources[AUDIO_PATCH_PORTS_MAX];
unsigned int num_sinks; /* number of sinks in following array */
struct audio_port_config sinks[AUDIO_PATCH_PORTS_MAX];
};
struct audio_port_config {
audio_port_handle_t id; /* port unique ID */
audio_port_role_t role; /* sink or source */
audio_port_type_t type; /* device, mix ... */
unsigned int config_mask; /* e.g AUDIO_PORT_CONFIG_ALL */
unsigned int sample_rate; /* sampling rate in Hz */
audio_channel_mask_t channel_mask; /* channel mask if applicable */
audio_format_t format; /* format if applicable */
struct audio_gain_config gain; /* gain to apply if applicable */
union {
struct audio_port_config_device_ext device; /* device specific info */
struct audio_port_config_mix_ext mix; /* mix specific info */
struct audio_port_config_session_ext session; /* session specific info */
} ext;
};
struct audio_port_config_device_ext {
audio_module_handle_t hw_module; /* module the device is attached to */
audio_devices_t type; /* device type (e.g AUDIO_DEVICE_OUT_SPEAKER) */
char address[AUDIO_DEVICE_MAX_ADDRESS_LEN]; /* device address. "" if N/A */
};
struct audio_port_config_mix_ext {
audio_module_handle_t hw_module; /* module the stream is attached to */
audio_io_handle_t handle; /* I/O handle of the input/output stream */
union {
//TODO: change use case for output streams: use strategy and mixer attributes
audio_stream_type_t stream;
audio_source_t source;
} usecase;
};
4. Call mpClientInterface->createAudioPatch() to create the audio path;
5. Update the patchDesc attributes;
6. If createAudioPatch() returned NO_ERROR, call mpClientInterface->onAudioPatchListUpdate() to update the AudioPatch list.
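For the built-in mic case we are following, the patch assembled above ends up looking roughly like this (illustrative values inferred from the code above, not a verbatim dump; input is the audio_io_handle_t from getInputForAttr):
struct audio_patch patch;
patch.num_sinks = 1;
patch.sinks[0].type = AUDIO_PORT_TYPE_MIX;              // from inputDesc->toAudioPortConfig()
patch.sinks[0].ext.mix.handle = input;                  // the I/O handle of the input stream
patch.sinks[0].ext.mix.usecase.source = AUDIO_SOURCE_MIC;
patch.num_sources = 1;
patch.sources[0].type = AUDIO_PORT_TYPE_DEVICE;         // from the device's toAudioPortConfig()
patch.sources[0].ext.device.type = AUDIO_DEVICE_IN_BUILTIN_MIC;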
We will focus on steps 4 and 6.
First, step 4 of AudioPolicyManager::setInputDevice() in AudioPolicyManager.cpp: creating the audio path
frameworks\av\services\audiopolicy\AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
return mAudioPolicyService->clientCreateAudioPatch(patch, handle, delayMs);
}
Continuing down
frameworks\av\services\audiopolicy\AudioPolicyService.cpp
status_t AudioPolicyService::clientCreateAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
return mAudioCommandThread->createAudioPatchCommand(patch, handle, delayMs);
}
Still in the same file
status_t AudioPolicyService::AudioCommandThread::createAudioPatchCommand(
const struct audio_patch *patch,
audio_patch_handle_t *handle,
int delayMs)
{
status_t status = NO_ERROR;
sp<AudioCommand> command = new AudioCommand();
command->mCommand = CREATE_AUDIO_PATCH;
CreateAudioPatchData *data = new CreateAudioPatchData();
data->mPatch = *patch;
data->mHandle = *handle;
command->mParam = data;
command->mWaitStatus = true;
ALOGV("AudioCommandThread() adding create patch delay %d", delayMs);
status = sendCommand(command, delayMs);
if (status == NO_ERROR) {
*handle = data->mHandle;
}
return status;
}
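sendCommand() itself queues the command, wakes the command thread, and then blocks on the command's condition variable until a status is posted back; slightly simplified, it looks like this:
status_t AudioPolicyService::AudioCommandThread::sendCommand(sp<AudioCommand>& command, int delayMs)
{
    {
        Mutex::Autolock _l(mLock);
        insertCommand_l(command, delayMs); // sorted insert into mAudioCommands
        mWaitWorkCV.signal();              // wake threadLoop()
    }
    Mutex::Autolock _l(command->mLock);
    while (command->mWaitStatus) {         // cleared by threadLoop() when done
        nsecs_t timeOutNs = kAudioCommandTimeoutNs + milliseconds(delayMs);
        if (command->mCond.waitRelative(command->mLock, timeOutNs) != NO_ERROR) {
            command->mStatus = TIMED_OUT;
            command->mWaitStatus = false;
        }
    }
    return command->mStatus;
}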
So the audio_patch is just wrapped into an AudioCommand and appended to the mAudioCommands queue; next, let's go straight to how threadLoop() processes it
bool AudioPolicyService::AudioCommandThread::threadLoop()
{
nsecs_t waitTime = INT64_MAX;
mLock.lock();
while (!exitPending())
{
sp<AudioPolicyService> svc;
while (!mAudioCommands.isEmpty() && !exitPending()) {
nsecs_t curTime = systemTime();
// commands are sorted by increasing time stamp: execute them from index 0 and up
if (mAudioCommands[0]->mTime <= curTime) {
sp<AudioCommand> command = mAudioCommands[0];
mAudioCommands.removeAt(0);
mLastCommand = command;
switch (command->mCommand) {
case START_TONE: {
mLock.unlock();
ToneData *data = (ToneData *)command->mParam.get();
ALOGV("AudioCommandThread() processing start tone %d on stream %d",
data->mType, data->mStream);
delete mpToneGenerator;
mpToneGenerator = new ToneGenerator(data->mStream, 1.0);
mpToneGenerator->startTone(data->mType);
mLock.lock();
}break;
case STOP_TONE: {
mLock.unlock();
ALOGV("AudioCommandThread() processing stop tone");
if (mpToneGenerator != NULL) {
mpToneGenerator->stopTone();
delete mpToneGenerator;
mpToneGenerator = NULL;
}
mLock.lock();
}break;
case SET_VOLUME: {
VolumeData *data = (VolumeData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set volume stream %d, \
volume %f, output %d", data->mStream, data->mVolume, data->mIO);
command->mStatus = AudioSystem::setStreamVolume(data->mStream,
data->mVolume,
data->mIO);
}break;
case SET_PARAMETERS: {
ParametersData *data = (ParametersData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set parameters string %s, io %d",
data->mKeyValuePairs.string(), data->mIO);
command->mStatus = AudioSystem::setParameters(data->mIO, data->mKeyValuePairs);
}break;
case SET_VOICE_VOLUME: {
VoiceVolumeData *data = (VoiceVolumeData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set voice volume volume %f",
data->mVolume);
command->mStatus = AudioSystem::setVoiceVolume(data->mVolume);
}break;
case STOP_OUTPUT: {
StopOutputData *data = (StopOutputData *)command->mParam.get();
ALOGV("AudioCommandThread() processing stop output %d",
data->mIO);
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doStopOutput(data->mIO, data->mStream, data->mSession);
mLock.lock();
}break;
case RELEASE_OUTPUT: {
ReleaseOutputData *data = (ReleaseOutputData *)command->mParam.get();
ALOGV("AudioCommandThread() processing release output %d",
data->mIO);
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doReleaseOutput(data->mIO, data->mStream, data->mSession);
mLock.lock();
}break;
case CREATE_AUDIO_PATCH: {
CreateAudioPatchData *data = (CreateAudioPatchData *)command->mParam.get();
ALOGV("AudioCommandThread() processing create audio patch");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->createAudioPatch(&data->mPatch, &data->mHandle);
}
} break;
case RELEASE_AUDIO_PATCH: {
ReleaseAudioPatchData *data = (ReleaseAudioPatchData *)command->mParam.get();
ALOGV("AudioCommandThread() processing release audio patch");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->releaseAudioPatch(data->mHandle);
}
} break;
case UPDATE_AUDIOPORT_LIST: {
ALOGV("AudioCommandThread() processing update audio port list");
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doOnAudioPortListUpdate();
mLock.lock();
}break;
case UPDATE_AUDIOPATCH_LIST: {
ALOGV("AudioCommandThread() processing update audio patch list");
svc = mService.promote();
if (svc == 0) {
break;
}
mLock.unlock();
svc->doOnAudioPatchListUpdate();
mLock.lock();
}break;
case SET_AUDIOPORT_CONFIG: {
SetAudioPortConfigData *data = (SetAudioPortConfigData *)command->mParam.get();
ALOGV("AudioCommandThread() processing set port config");
sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
if (af == 0) {
command->mStatus = PERMISSION_DENIED;
} else {
command->mStatus = af->setAudioPortConfig(&data->mConfig);
}
} break;
default:
ALOGW("AudioCommandThread() unknown command %d", command->mCommand);
}
{
Mutex::Autolock _l(command->mLock);
if (command->mWaitStatus) {
command->mWaitStatus = false;
command->mCond.signal();
}
}
waitTime = INT64_MAX;
} else {
waitTime = mAudioCommands[0]->mTime - curTime;
break;
}
}
// release mLock before releasing strong reference on the service as
// AudioPolicyService destructor calls AudioCommandThread::exit() which acquires mLock.
mLock.unlock();
svc.clear();
mLock.lock();
if (!exitPending() && mAudioCommands.isEmpty()) {
// release delayed commands wake lock
release_wake_lock(mName.string());
ALOGV("AudioCommandThread() going to sleep");
mWaitWorkCV.waitRelative(mLock, waitTime);
ALOGV("AudioCommandThread() waking up");
}
}
// release delayed commands wake lock before quitting
if (!mAudioCommands.isEmpty()) {
release_wake_lock(mName.string());
}
mLock.unlock();
return false;
}
Here we only care about the CREATE_AUDIO_PATCH branch, which calls af->createAudioPatch() on the AudioFlinger side; note that the same loop also contains the UPDATE_AUDIOPATCH_LIST branch that comes up later.
frameworks\av\services\audioflinger\PatchPanel.cpp
status_t AudioFlinger::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle)
{
Mutex::Autolock _l(mLock);
if (mPatchPanel != 0) {
return mPatchPanel->createAudioPatch(patch, handle);
}
return NO_INIT;
}
Onward (honestly, I'm starting to lose the will to keep going, what with all this winding around...)
status_t AudioFlinger::PatchPanel::createAudioPatch(const struct audio_patch *patch,
audio_patch_handle_t *handle)
{
ALOGV("createAudioPatch() num_sources %d num_sinks %d handle %d",
patch->num_sources, patch->num_sinks, *handle);
status_t status = NO_ERROR;
audio_patch_handle_t halHandle = AUDIO_PATCH_HANDLE_NONE;
sp<AudioFlinger> audioflinger = mAudioFlinger.promote();
if (audioflinger == 0) {
return NO_INIT;
}
if (handle == NULL || patch == NULL) {
return BAD_VALUE;
}
if (patch->num_sources == 0 || patch->num_sources > AUDIO_PATCH_PORTS_MAX ||
patch->num_sinks == 0 || patch->num_sinks > AUDIO_PATCH_PORTS_MAX) {
return BAD_VALUE;
}
// limit number of sources to 1 for now or 2 sources for special cross hw module case.
// only the audio policy manager can request a patch creation with 2 sources.
if (patch->num_sources > 2) {
return INVALID_OPERATION;
}
if (*handle != AUDIO_PATCH_HANDLE_NONE) {
for (size_t index = 0; *handle != 0 && index < mPatches.size(); index++) {
if (*handle == mPatches[index]->mHandle) {
ALOGV("createAudioPatch() removing patch handle %d", *handle);
halHandle = mPatches[index]->mHalHandle;
Patch *removedPatch = mPatches[index];
mPatches.removeAt(index);
delete removedPatch;
break;
}
}
}
Patch *newPatch = new Patch(patch);
switch (patch->sources[0].type) {
case AUDIO_PORT_TYPE_DEVICE: {
audio_module_handle_t srcModule = patch->sources[0].ext.device.hw_module;
ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
if (index < 0) {
ALOGW("createAudioPatch() bad src hw module %d", srcModule);
status = BAD_VALUE;
goto exit;
}
AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
for (unsigned int i = 0; i < patch->num_sinks; i++) {
// support only one sink if connection to a mix or across HW modules
if ((patch->sinks[i].type == AUDIO_PORT_TYPE_MIX ||
patch->sinks[i].ext.mix.hw_module != srcModule) &&
patch->num_sinks > 1) {
status = INVALID_OPERATION;
goto exit;
}
// reject connection to different sink types
if (patch->sinks[i].type != patch->sinks[0].type) {
ALOGW("createAudioPatch() different sink types in same patch not supported");
status = BAD_VALUE;
goto exit;
}
// limit to connections between devices and input streams for HAL before 3.0
if (patch->sinks[i].ext.mix.hw_module == srcModule &&
(audioHwDevice->version() < AUDIO_DEVICE_API_VERSION_3_0) &&
(patch->sinks[i].type != AUDIO_PORT_TYPE_MIX)) {
ALOGW("createAudioPatch() invalid sink type %d for device source",
patch->sinks[i].type);
status = BAD_VALUE;
goto exit;
}
}
if (patch->sinks[0].ext.device.hw_module != srcModule) {
// limit to device to device connection if not on same hw module
if (patch->sinks[0].type != AUDIO_PORT_TYPE_DEVICE) {
ALOGW("createAudioPatch() invalid sink type for cross hw module");
status = INVALID_OPERATION;
goto exit;
}
// special case num sources == 2 -=> reuse an exiting output mix to connect to the
// sink
if (patch->num_sources == 2) {
if (patch->sources[1].type != AUDIO_PORT_TYPE_MIX ||
patch->sinks[0].ext.device.hw_module !=
patch->sources[1].ext.mix.hw_module) {
ALOGW("createAudioPatch() invalid source combination");
status = INVALID_OPERATION;
goto exit;
}
sp<ThreadBase> thread =
audioflinger->checkPlaybackThread_l(patch->sources[1].ext.mix.handle);
newPatch->mPlaybackThread = (MixerThread *)thread.get();
if (thread == 0) {
ALOGW("createAudioPatch() cannot get playback thread");
status = INVALID_OPERATION;
goto exit;
}
} else {
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
audio_devices_t device = patch->sinks[0].ext.device.type;
String8 address = String8(patch->sinks[0].ext.device.address);
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
newPatch->mPlaybackThread = audioflinger->openOutput_l(
patch->sinks[0].ext.device.hw_module,
&output,
&config,
device,
address,
AUDIO_OUTPUT_FLAG_NONE);
ALOGV("audioflinger->openOutput_l() returned %p",
newPatch->mPlaybackThread.get());
if (newPatch->mPlaybackThread == 0) {
status = NO_MEMORY;
goto exit;
}
}
uint32_t channelCount = newPatch->mPlaybackThread->channelCount();
audio_devices_t device = patch->sources[0].ext.device.type;
String8 address = String8(patch->sources[0].ext.device.address);
audio_config_t config = AUDIO_CONFIG_INITIALIZER;
audio_channel_mask_t inChannelMask = audio_channel_in_mask_from_count(channelCount);
config.sample_rate = newPatch->mPlaybackThread->sampleRate();
config.channel_mask = inChannelMask;
config.format = newPatch->mPlaybackThread->format();
audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
newPatch->mRecordThread = audioflinger->openInput_l(srcModule,
&input,
&config,
device,
address,
AUDIO_SOURCE_MIC,
AUDIO_INPUT_FLAG_NONE);
ALOGV("audioflinger->openInput_l() returned %p inChannelMask %08x",
newPatch->mRecordThread.get(), inChannelMask);
if (newPatch->mRecordThread == 0) {
status = NO_MEMORY;
goto exit;
}
status = createPatchConnections(newPatch, patch);
if (status != NO_ERROR) {
goto exit;
}
} else {
if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
if (patch->sinks[0].type == AUDIO_PORT_TYPE_MIX) {
sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
patch->sinks[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad capture I/O handle %d",
patch->sinks[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
} else {
audio_hw_device_t *hwDevice = audioHwDevice->hwDevice();
status = hwDevice->create_audio_patch(hwDevice,
patch->num_sources,
patch->sources,
patch->num_sinks,
patch->sinks,
&halHandle);
}
} else {
sp<ThreadBase> thread = audioflinger->checkRecordThread_l(
patch->sinks[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad capture I/O handle %d",
patch->sinks[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
char *address;
if (strcmp(patch->sources[0].ext.device.address, "") != 0) {
address = audio_device_address_to_parameter(
patch->sources[0].ext.device.type,
patch->sources[0].ext.device.address);
} else {
address = (char *)calloc(1, 1);
}
AudioParameter param = AudioParameter(String8(address));
free(address);
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING),
(int)patch->sources[0].ext.device.type);
param.addInt(String8(AUDIO_PARAMETER_STREAM_INPUT_SOURCE),
(int)patch->sinks[0].ext.mix.usecase.source);
ALOGV("createAudioPatch() AUDIO_PORT_TYPE_DEVICE setParameters %s",
param.toString().string());
status = thread->setParameters(param.toString());
}
}
} break;
case AUDIO_PORT_TYPE_MIX: {
audio_module_handle_t srcModule = patch->sources[0].ext.mix.hw_module;
ssize_t index = audioflinger->mAudioHwDevs.indexOfKey(srcModule);
if (index < 0) {
ALOGW("createAudioPatch() bad src hw module %d", srcModule);
status = BAD_VALUE;
goto exit;
}
// limit to connections between devices and output streams
for (unsigned int i = 0; i < patch->num_sinks; i++) {
if (patch->sinks[i].type != AUDIO_PORT_TYPE_DEVICE) {
ALOGW("createAudioPatch() invalid sink type %d for mix source",
patch->sinks[i].type);
status = BAD_VALUE;
goto exit;
}
// limit to connections between sinks and sources on same HW module
if (patch->sinks[i].ext.device.hw_module != srcModule) {
status = BAD_VALUE;
goto exit;
}
}
AudioHwDevice *audioHwDevice = audioflinger->mAudioHwDevs.valueAt(index);
sp<ThreadBase> thread =
audioflinger->checkPlaybackThread_l(patch->sources[0].ext.mix.handle);
if (thread == 0) {
ALOGW("createAudioPatch() bad playback I/O handle %d",
patch->sources[0].ext.mix.handle);
status = BAD_VALUE;
goto exit;
}
if (audioHwDevice->version() >= AUDIO_DEVICE_API_VERSION_3_0) {
status = thread->sendCreateAudioPatchConfigEvent(patch, &halHandle);
} else {
audio_devices_t type = AUDIO_DEVICE_NONE;
for (unsigned int i = 0; i < patch->num_sinks; i++) {
type |= patch->sinks[i].ext.device.type;
}
char *address;
if (strcmp(patch->sinks[0].ext.device.address, "") != 0) {
//FIXME: we only support address on first sink with HAL version < 3.0
address = audio_device_address_to_parameter(
patch->sinks[0].ext.device.type,
patch->sinks[0].ext.device.address);
} else {
address = (char *)calloc(1, 1);
}
AudioParameter param = AudioParameter(String8(address));
free(address);
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)type);
status = thread->setParameters(param.toString());
}
} break;
default:
status = BAD_VALUE;
goto exit;
}
exit:
ALOGV("createAudioPatch() status %d", status);
if (status == NO_ERROR) {
*handle = audioflinger->nextUniqueId();
newPatch->mHandle = *handle;
newPatch->mHalHandle = halHandle;
mPatches.add(newPatch);
ALOGV("createAudioPatch() added new patch handle %d halHandle %d", *handle, halHandle);
} else {
clearPatchConnections(newPatch);
delete newPatch;
}
return status;
}
This function does the following:
1. Coming from AudioPolicyManager::setInputDevice(), num_sources and num_sinks are both 1;
2. If the incoming *handle is not AUDIO_PATCH_HANDLE_NONE, the existing patch with that handle is looked up in the mPatches collection and removed; here, though, the handle is still AUDIO_PATCH_HANDLE_NONE;
3. Our source type is AUDIO_PORT_TYPE_DEVICE, so the audio_module_handle_t is read from the patch and the matching AudioHwDevice is fetched on the AF side; with the parameters set up earlier, none of the checks inside the following for loop trigger;
4. Check whether the source's audio_module_handle_t matches the sink's; of course it does;
5. Then check the HAL version: in hardware\aw\audio\tulip\audio_hw.c we have adev->hw_device.common.version = AUDIO_DEVICE_API_VERSION_2_0;
6. Since the version is below 3.0, checkRecordThread_l() is called on the AF side, i.e. the RecordThread is fetched from mRecordThreads by audio_io_handle_t;
7. Build an AudioParameter from the address and put the routing (the source device type) and the input source into it;
8. Call thread->setParameters() to hand the AudioParameter over.
Let's keep following step 8: thread->setParameters()
frameworks\av\services\audioflinger\Threads.cpp
status_t AudioFlinger::ThreadBase::setParameters(const String8& keyValuePairs)
{
status_t status;
Mutex::Autolock _l(mLock);
return sendSetParameterConfigEvent_l(keyValuePairs);
}
Hmm, this feels like the start of yet another long detour
status_t AudioFlinger::ThreadBase::sendSetParameterConfigEvent_l(const String8& keyValuePair)
{
sp<ConfigEvent> configEvent = (ConfigEvent *)new SetParameterConfigEvent(keyValuePair);
return sendConfigEvent_l(configEvent);
}
The AudioParameter's key/value string is wrapped into a SetParameterConfigEvent (a ConfigEvent), and we continue into
status_t AudioFlinger::ThreadBase::sendConfigEvent_l(sp<ConfigEvent>& event)
{
status_t status = NO_ERROR;
mConfigEvents.add(event);
mWaitWorkCV.signal();
mLock.unlock();
{
Mutex::Autolock _l(event->mLock);
while (event->mWaitStatus) {
if (event->mCond.waitRelative(event->mLock, kConfigEventTimeoutNs) != NO_ERROR) {
event->mStatus = TIMED_OUT;
event->mWaitStatus = false;
}
}
status = event->mStatus;
}
mLock.lock();
return status;
}
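Note the mWaitStatus wait loop above: it was armed by the event's constructor. SetParameterConfigEvent, declared in Threads.h, looks roughly like this:
class SetParameterConfigEvent : public ConfigEvent {
public:
    SetParameterConfigEvent(const String8& keyValuePairs) :
        ConfigEvent(CFG_EVENT_SET_PARAMETER) {
        mData = new SetParameterConfigEventData(keyValuePairs);
        mWaitStatus = true;   // the sender blocks until the event is processed
    }
    virtual ~SetParameterConfigEvent() {}
};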
The ConfigEvent is added to mConfigEvents, and mWaitWorkCV.signal() notifies the RecordThread that it can get to work: a thread blocked in mWaitWorkCV.wait(mLock) resumes and jumps back to the reacquire_wakelock label, and a bit further on it calls processConfigEvents_l(), which is exactly what handles ConfigEvent items
void AudioFlinger::ThreadBase::processConfigEvents_l()
{
bool configChanged = false;
while (!mConfigEvents.isEmpty()) {
ALOGV("processConfigEvents_l() remaining events %d", mConfigEvents.size());
sp<ConfigEvent> event = mConfigEvents[0];
mConfigEvents.removeAt(0);
switch (event->mType) {
case CFG_EVENT_PRIO: {
PrioConfigEventData *data = (PrioConfigEventData *)event->mData.get();
// FIXME Need to understand why this has to be done asynchronously
int err = requestPriority(data->mPid, data->mTid, data->mPrio,
true /*asynchronous*/);
if (err != 0) {
ALOGW("Policy SCHED_FIFO priority %d is unavailable for pid %d tid %d; error %d",
data->mPrio, data->mPid, data->mTid, err);
}
} break;
case CFG_EVENT_IO: {
IoConfigEventData *data = (IoConfigEventData *)event->mData.get();
audioConfigChanged(data->mEvent, data->mParam);
} break;
case CFG_EVENT_SET_PARAMETER: {
SetParameterConfigEventData *data = (SetParameterConfigEventData *)event->mData.get();
if (checkForNewParameter_l(data->mKeyValuePairs, event->mStatus)) {
configChanged = true;
}
} break;
case CFG_EVENT_CREATE_AUDIO_PATCH: {
CreateAudioPatchConfigEventData *data =
(CreateAudioPatchConfigEventData *)event->mData.get();
event->mStatus = createAudioPatch_l(&data->mPatch, &data->mHandle);
} break;
case CFG_EVENT_RELEASE_AUDIO_PATCH: {
ReleaseAudioPatchConfigEventData *data =
(ReleaseAudioPatchConfigEventData *)event->mData.get();
event->mStatus = releaseAudioPatch_l(data->mHandle);
} break;
default:
ALOG_ASSERT(false, "processConfigEvents_l() unknown event type %d", event->mType);
break;
}
{
Mutex::Autolock _l(event->mLock);
if (event->mWaitStatus) {
event->mWaitStatus = false;
event->mCond.signal();
}
}
ALOGV_IF(mConfigEvents.isEmpty(), "processConfigEvents_l() DONE thread %p", this);
}
if (configChanged) {
cacheParameters_l();
}
}
Sure enough, this is where the events queued in mConfigEvents get processed. Our event->mType is CFG_EVENT_SET_PARAMETER, so checkForNewParameter_l() is called next, and naturally it is the RecordThread override that runs... that really was one giant loop around.
bool AudioFlinger::RecordThread::checkForNewParameter_l(const String8& keyValuePair,
status_t& status)
{
bool reconfig = false;
status = NO_ERROR;
audio_format_t reqFormat = mFormat;
uint32_t samplingRate = mSampleRate;
audio_channel_mask_t channelMask = audio_channel_in_mask_from_count(mChannelCount);
AudioParameter param = AudioParameter(keyValuePair);
int value;
if (param.getInt(String8(AudioParameter::keySamplingRate), value) == NO_ERROR) {
samplingRate = value;
reconfig = true;
}
if (param.getInt(String8(AudioParameter::keyFormat), value) == NO_ERROR) {
if ((audio_format_t) value != AUDIO_FORMAT_PCM_16_BIT) {
status = BAD_VALUE;
} else {
reqFormat = (audio_format_t) value;
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyChannels), value) == NO_ERROR) {
audio_channel_mask_t mask = (audio_channel_mask_t) value;
if (mask != AUDIO_CHANNEL_IN_MONO && mask != AUDIO_CHANNEL_IN_STEREO) {
status = BAD_VALUE;
} else {
channelMask = mask;
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyFrameCount), value) == NO_ERROR) {
// do not accept frame count changes if tracks are open as the track buffer
// size depends on frame count and correct behavior would not be guaranteed
// if frame count is changed after track creation
if (mActiveTracks.size() > 0) {
status = INVALID_OPERATION;
} else {
reconfig = true;
}
}
if (param.getInt(String8(AudioParameter::keyRouting), value) == NO_ERROR) {
// forward device change to effects that have requested to be
// aware of attached audio device.
for (size_t i = 0; i < mEffectChains.size(); i++) {
mEffectChains[i]->setDevice_l(value);
}
// store input device and output device but do not forward output device to audio HAL.
// Note that status is ignored by the caller for output device
// (see AudioFlinger::setParameters()
if (audio_is_output_devices(value)) {
mOutDevice = value;
status = BAD_VALUE;
} else {
mInDevice = value;
// disable AEC and NS if the device is a BT SCO headset supporting those
// pre processings
if (mTracks.size() > 0) {
bool suspend = audio_is_bluetooth_sco_device(mInDevice) &&
mAudioFlinger->btNrecIsOff();
for (size_t i = 0; i < mTracks.size(); i++) {
sp<RecordTrack> track = mTracks[i];
setEffectSuspended_l(FX_IID_AEC, suspend, track->sessionId());
setEffectSuspended_l(FX_IID_NS, suspend, track->sessionId());
}
}
}
}
if (param.getInt(String8(AudioParameter::keyInputSource), value) == NO_ERROR &&
mAudioSource != (audio_source_t)value) {
// forward device change to effects that have requested to be
// aware of attached audio device.
for (size_t i = 0; i < mEffectChains.size(); i++) {
mEffectChains[i]->setAudioSource_l((audio_source_t)value);
}
mAudioSource = (audio_source_t)value;
}
if (status == NO_ERROR) {
status = mInput->stream->common.set_parameters(&mInput->stream->common,
keyValuePair.string());
if (status == INVALID_OPERATION) {
inputStandBy();
status = mInput->stream->common.set_parameters(&mInput->stream->common,
keyValuePair.string());
}
if (reconfig) {
if (status == BAD_VALUE &&
reqFormat == mInput->stream->common.get_format(&mInput->stream->common) &&
reqFormat == AUDIO_FORMAT_PCM_16_BIT &&
(mInput->stream->common.get_sample_rate(&mInput->stream->common)
<= (2 * samplingRate)) &&
audio_channel_count_from_in_mask(
mInput->stream->common.get_channels(&mInput->stream->common)) <= FCC_2 &&
(channelMask == AUDIO_CHANNEL_IN_MONO ||
channelMask == AUDIO_CHANNEL_IN_STEREO)) {
status = NO_ERROR;
}
if (status == NO_ERROR) {
readInputParameters_l();
sendIoConfigEvent_l(AudioSystem::INPUT_CONFIG_CHANGED);
}
}
}
return reconfig;
}
Here the AudioParameter values are read back out. From before we know that only Routing and InputSource were put in, so patch->sources[0].ext.device.type is stored into mInDevice and patch->sinks[0].ext.mix.usecase.source into mAudioSource, and finally the HAL layer's set_parameters() is called to push mInDevice and mAudioSource down. Clearly reconfig stays false the whole way through, i.e. none of the other parameters changed.
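For reference, the key/value string that reaches the HAL in this case can be reconstructed roughly as follows (a sketch: the numeric values assume AUDIO_SOURCE_MIC == 1 and AUDIO_DEVICE_IN_BUILTIN_MIC == 0x80000004, which prints as a negative int):
AudioParameter param;
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)AUDIO_DEVICE_IN_BUILTIN_MIC);
param.addInt(String8(AUDIO_PARAMETER_STREAM_INPUT_SOURCE), (int)AUDIO_SOURCE_MIC);
// param.toString() then yields something like:
//   "input_source=1;routing=-2147483644"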
hardware\aw\audio\tulip\audio_hw.c
static int in_set_parameters(struct audio_stream *stream, const char *kvpairs)
{
struct sunxi_stream_in *in = (struct sunxi_stream_in *)stream;
struct sunxi_audio_device *adev = in->dev;
struct str_parms *parms;
char *str;
char value[128];
int ret, val = 0;
bool do_standby = false;
ALOGV("in_set_parameters: %s", kvpairs);
parms = str_parms_create_str(kvpairs);
ret = str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_INPUT_SOURCE, value, sizeof(value));
pthread_mutex_lock(&adev->lock);
pthread_mutex_lock(&in->lock);
if (ret >= 0) {
val = atoi(value);
/* no audio source uses val == 0 */
if ((in->source != val) && (val != 0)) {
in->source = val;
do_standby = true;
}
}
ret = str_parms_get_str(parms, AUDIO_PARAMETER_STREAM_ROUTING, value, sizeof(value));
if (ret >= 0) {
val = atoi(value) & ~AUDIO_DEVICE_BIT_IN;
if ((adev->mode != AUDIO_MODE_IN_CALL) && (in->device != val) && (val != 0)) {
in->device = val;
do_standby = true;
} else if((adev->mode == AUDIO_MODE_IN_CALL) && (in->source != val) && (val != 0)) {
in->device = val;
select_device(adev);
}
}
if (do_standby)
do_input_standby(in);
pthread_mutex_unlock(&in->lock);
pthread_mutex_unlock(&adev->lock);
str_parms_destroy(parms);
return ret;
}
1. Fetch the INPUT_SOURCE key and update in->source, i.e. AUDIO_SOURCE_MIC;
2. adev_open() initialized adev->mode to AUDIO_MODE_NORMAL and adev->in_device to AUDIO_DEVICE_IN_BUILTIN_MIC & ~AUDIO_DEVICE_BIT_IN, so here only the input stream's in->device is updated and no device re-selection is needed; the device is the built-in mic, AUDIO_DEVICE_IN_BUILTIN_MIC.
At this point the audio input path is complete.
Now for step 6 of AudioPolicyManager::setInputDevice() in AudioPolicyManager.cpp:
The return value of mpClientInterface->createAudioPatch() also travels a long way; following it down layer by layer, it is ultimately the value assigned in audio_hw.c's in_set_parameters(), where ret is the result of calling str_parms_get_str.
That function is implemented in system\core\libcutils\str_parms.c
int str_parms_get_str(struct str_parms *str_parms, const char *key, char *val,
int len)
{
char *value;
value = hashmapGet(str_parms->map, (void *)key);
if (value)
return strlcpy(val, value, len);
return -ENOENT;
}
This function simply pulls the value for key out of str_parms' hashmap and returns it (the return value being the copied string length, or -ENOENT if absent). The AUDIO_PARAMETER_STREAM_ROUTING key was stored before thread->setParameters(param.toString()) was called, i.e. by
param.addInt(String8(AUDIO_PARAMETER_STREAM_ROUTING), (int)patch->sources[0].ext.device.type); so the status here is not NO_ERROR, and the AudioPatch list does not get updated after all.
A little earlier we noticed that the AF-side RecordThread has already started; keep that in mind, we will come back to it shortly.
Now let's continue with step 6 of AudioRecord.cpp::start(): the AudioRecordThread's resume() function
frameworks\av\media\libmedia\AudioRecord.cpp
void AudioRecord::AudioRecordThread::resume()
{
AutoMutex _l(mMyLock);
mIgnoreNextPausedInt = true;
if (mPaused || mPausedInt) {
mPaused = false;
mPausedInt = false;
mMyCond.signal();
}
}
This marks mIgnoreNextPausedInt as true, leaves mPaused and mPausedInt both false, and calls mMyCond.signal() to notify the AudioRecordThread
bool AudioRecord::AudioRecordThread::threadLoop()
{
{
AutoMutex _l(mMyLock);
if (mPaused) {
mMyCond.wait(mMyLock);
// caller will check for exitPending()
return true;
}
if (mIgnoreNextPausedInt) {
mIgnoreNextPausedInt = false;
mPausedInt = false;
}
if (mPausedInt) {
if (mPausedNs > 0) {
(void) mMyCond.waitRelative(mMyLock, mPausedNs);
} else {
mMyCond.wait(mMyLock);
}
mPausedInt = false;
return true;
}
}
nsecs_t ns = mReceiver.processAudioBuffer();
switch (ns) {
case 0:
return true;
case NS_INACTIVE:
pauseInternal();
return true;
case NS_NEVER:
return false;
case NS_WHENEVER:
// FIXME increase poll interval, or make event-driven
ns = 1000000000LL;
// fall through
default:
LOG_ALWAYS_FATAL_IF(ns < 0, "processAudioBuffer() returned %" PRId64, ns);
pauseInternal(ns);
return true;
}
}
The AudioRecordThread waits for the signal() in mMyCond.wait(mMyLock). We know mPaused is false, mIgnoreNextPausedInt flips back to false, and mPausedInt is false too, so on the next iteration processAudioBuffer() gets called; it returns NS_WHENEVER, so the loop pauses for 1000000000 ns, i.e. 1 s, and keeps cycling like this until the app stops recording.
Next let's analyze the processAudioBuffer() function
nsecs_t AudioRecord::processAudioBuffer()
{
mLock.lock();
if (mAwaitBoost) {
mAwaitBoost = false;
mLock.unlock();
static const int32_t kMaxTries = 5;
int32_t tryCounter = kMaxTries;
uint32_t pollUs = 10000;
do {
int policy = sched_getscheduler(0);
if (policy == SCHED_FIFO || policy == SCHED_RR) {
break;
}
usleep(pollUs);
pollUs <<= 1;
} while (tryCounter-- > 0);
if (tryCounter < 0) {
ALOGE("did not receive expected priority boost on time");
}
// Run again immediately
return 0;
}
// Can only reference mCblk while locked
int32_t flags = android_atomic_and(~CBLK_OVERRUN, &mCblk->mFlags);
// Check for track invalidation
if (flags & CBLK_INVALID) {
(void) restoreRecord_l("processAudioBuffer");
mLock.unlock();
// Run again immediately, but with a new IAudioRecord
return 0;
}
bool active = mActive;
// Manage overrun callback, must be done under lock to avoid race with releaseBuffer()
bool newOverrun = false;
if (flags & CBLK_OVERRUN) {
if (!mInOverrun) {
mInOverrun = true;
newOverrun = true;
}
}
// Get current position of server
size_t position = mProxy->getPosition();
// Manage marker callback
bool markerReached = false;
size_t markerPosition = mMarkerPosition;
// FIXME fails for wraparound, need 64 bits
if (!mMarkerReached && (markerPosition > 0) && (position >= markerPosition)) {
mMarkerReached = markerReached = true;
}
// Determine the number of new position callback(s) that will be needed, while locked
size_t newPosCount = 0;
size_t newPosition = mNewPosition;
uint32_t updatePeriod = mUpdatePeriod;
// FIXME fails for wraparound, need 64 bits
if (updatePeriod > 0 && position >= newPosition) {
newPosCount = ((position - newPosition) / updatePeriod) + 1;
mNewPosition += updatePeriod * newPosCount;
}
// Cache other fields that will be needed soon
uint32_t notificationFrames = mNotificationFramesAct;
if (mRefreshRemaining) {
mRefreshRemaining = false;
mRemainingFrames = notificationFrames;
mRetryOnPartialBuffer = false;
}
size_t misalignment = mProxy->getMisalignment();
uint32_t sequence = mSequence;
// These fields don't need to be cached, because they are assigned only by set():
// mTransfer, mCbf, mUserData, mSampleRate, mFrameSize
mLock.unlock();
// perform callbacks while unlocked
if (newOverrun) {
mCbf(EVENT_OVERRUN, mUserData, NULL);
}
if (markerReached) {
mCbf(EVENT_MARKER, mUserData, &markerPosition);
}
while (newPosCount > 0) {
size_t temp = newPosition;
mCbf(EVENT_NEW_POS, mUserData, &temp);
newPosition += updatePeriod;
newPosCount--;
}
if (mObservedSequence != sequence) {
mObservedSequence = sequence;
mCbf(EVENT_NEW_IAUDIORECORD, mUserData, NULL);
}
// if inactive, then don't run me again until re-started
if (!active) {
return NS_INACTIVE;
}
// Compute the estimated time until the next timed event (position, markers)
uint32_t minFrames = ~0;
if (!markerReached && position < markerPosition) {
minFrames = markerPosition - position;
}
if (updatePeriod > 0 && updatePeriod < minFrames) {
minFrames = updatePeriod;
}
// If > 0, poll periodically to recover from a stuck server. A good value is 2.
static const uint32_t kPoll = 0;
if (kPoll > 0 && mTransfer == TRANSFER_CALLBACK && kPoll * notificationFrames < minFrames) {
minFrames = kPoll * notificationFrames;
}
// Convert frame units to time units
nsecs_t ns = NS_WHENEVER;
if (minFrames != (uint32_t) ~0) {
// This "fudge factor" avoids soaking CPU, and compensates for late progress by server
static const nsecs_t kFudgeNs = 10000000LL; // 10 ms
ns = ((minFrames * 1000000000LL) / mSampleRate) + kFudgeNs;
}
// If not supplying data by EVENT_MORE_DATA, then we're done
if (mTransfer != TRANSFER_CALLBACK) {
return ns;
}
struct timespec timeout;
const struct timespec *requested = &ClientProxy::kForever;
if (ns != NS_WHENEVER) {
timeout.tv_sec = ns / 1000000000LL;
timeout.tv_nsec = ns % 1000000000LL;
ALOGV("timeout %ld.%03d", timeout.tv_sec, (int) timeout.tv_nsec / 1000000);
requested = &timeout;
}
while (mRemainingFrames > 0) {
Buffer audioBuffer;
audioBuffer.frameCount = mRemainingFrames;
size_t nonContig;
status_t err = obtainBuffer(&audioBuffer, requested, NULL, &nonContig);
LOG_ALWAYS_FATAL_IF((err != NO_ERROR) != (audioBuffer.frameCount == 0),
"obtainBuffer() err=%d frameCount=%zu", err, audioBuffer.frameCount);
requested = &ClientProxy::kNonBlocking;
size_t avail = audioBuffer.frameCount + nonContig;
ALOGV("obtainBuffer(%u) returned %zu = %zu + %zu err %d",
mRemainingFrames, avail, audioBuffer.frameCount, nonContig, err);
if (err != NO_ERROR) {
if (err == TIMED_OUT || err == WOULD_BLOCK || err == -EINTR) {
break;
}
ALOGE("Error %d obtaining an audio buffer, giving up.", err);
return NS_NEVER;
}
if (mRetryOnPartialBuffer) {
mRetryOnPartialBuffer = false;
if (avail < mRemainingFrames) {
int64_t myns = ((mRemainingFrames - avail) *
1100000000LL) / mSampleRate;
if (ns < 0 || myns < ns) {
ns = myns;
}
return ns;
}
}
size_t reqSize = audioBuffer.size;
mCbf(EVENT_MORE_DATA, mUserData, &audioBuffer);
size_t readSize = audioBuffer.size;
// Sanity check on returned size
if (ssize_t(readSize) < 0 || readSize > reqSize) {
ALOGE("EVENT_MORE_DATA requested %zu bytes but callback returned %zd bytes",
reqSize, ssize_t(readSize));
return NS_NEVER;
}
if (readSize == 0) {
// The callback is done consuming buffers
// Keep this thread going to handle timed events and
// still try to provide more data in intervals of WAIT_PERIOD_MS
// but don't just loop and block the CPU, so wait
return WAIT_PERIOD_MS * 1000000LL;
}
size_t releasedFrames = readSize / mFrameSize;
audioBuffer.frameCount = releasedFrames;
mRemainingFrames -= releasedFrames;
if (misalignment >= releasedFrames) {
misalignment -= releasedFrames;
} else {
misalignment = 0;
}
releaseBuffer(&audioBuffer);
// FIXME here is where we would repeat EVENT_MORE_DATA again on same advanced buffer
// if callback doesn't like to accept the full chunk
if (readSize < reqSize) {
continue;
}
// There could be enough non-contiguous frames available to satisfy the remaining request
if (mRemainingFrames <= nonContig) {
continue;
}
#if 0
// This heuristic tries to collapse a series of EVENT_MORE_DATA that would total to a
// sum <= notificationFrames. It replaces that series by at most two EVENT_MORE_DATA
// that total to a sum == notificationFrames.
if (0 < misalignment && misalignment <= mRemainingFrames) {
mRemainingFrames = misalignment;
return (mRemainingFrames * 1100000000LL) / mSampleRate;
}
#endif
}
mRemainingFrames = notificationFrames;
mRetryOnPartialBuffer = true;
// A lot has transpired since ns was calculated, so run again immediately and re-calculate
return 0;
}
This function does the following:
1. Searching for mAwaitBoost shows it can only be true when the audio input flag mFlags is AUDIO_INPUT_FLAG_FAST; from the earlier analysis, mFlags here is AUDIO_INPUT_FLAG_NONE;
2. Read mCblk->mFlags (atomically clearing CBLK_OVERRUN) into flags to see whether the buffered data has overrun, then check whether the track has been invalidated;
(A quick aside: mActive was set to true just before the AudioRecordThread was resumed. And looking back at the end of AudioRecord::set(), a whole batch of members was initialized there, most of which are used here: mInOverrun is false, mMarkerPosition is 0, mMarkerReached is false, mNewPosition is 0, mUpdatePeriod is 0, mSequence is 1, and mNotificationFramesAct was obtained earlier through audioFlinger->openRecord(), 1024 here.)
3. If the buffer has overrun, mInOverrun and newOverrun are both set to true, and later an EVENT_OVERRUN is delivered upward through mCbf; this mCbf is actually the recorderCallback callback in the JNI layer;
4. If the current position has passed the marker position, mMarkerReached and markerReached are set to true, and later an EVENT_MARKER is sent up through mCbf;
5. Work out whether mRemainingFrames needs updating and whether EVENT_NEW_POS or EVENT_NEW_IAUDIORECORD events need to be sent;
6. mTransfer was already determined in AudioRecord::set() to be TRANSFER_SYNC, so in the end we simply return NS_WHENEVER.
In other words, the AudioRecordThread watches mCblk->mFlags to track the state of the buffered data and reports it upward via EVENT_OVERRUN / EVENT_MARKER / EVENT_NEW_POS / EVENT_NEW_IAUDIORECORD and friends.
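On the client side those events arrive in the callback registered with the AudioRecord; a hypothetical native receiver might look like this (the event names are from AudioRecord.h; the handler bodies are placeholders):
// Hypothetical native callback matching AudioRecord::callback_t
static void recordCallback(int event, void* user, void* info)
{
    switch (event) {
    case AudioRecord::EVENT_OVERRUN:          // server overwrote data not yet read
        ALOGW("overrun");
        break;
    case AudioRecord::EVENT_MARKER:           // info points at the marker position
        ALOGV("marker reached at frame %u", *(uint32_t *)info);
        break;
    case AudioRecord::EVENT_NEW_POS:          // periodic position update
        ALOGV("new position %u", *(uint32_t *)info);
        break;
    case AudioRecord::EVENT_NEW_IAUDIORECORD: // track was re-created after invalidation
        break;
    default:
        break;
    }
}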
Good, that wraps up AudioRecordThread. Earlier we saw AudioFlinger::ThreadBase::sendConfigEvent_l() in Threads.cpp call mWaitWorkCV.signal(), so now let's continue with the RecordThread itself
frameworks\av\services\audioflinger\Threads.cpp
bool AudioFlinger::RecordThread::threadLoop()
{
nsecs_t lastWarning = 0;
inputStandBy();
reacquire_wakelock:
sp<RecordTrack> activeTrack;
int activeTracksGen;
{
Mutex::Autolock _l(mLock);
size_t size = mActiveTracks.size();
activeTracksGen = mActiveTracksGen;
if (size > 0) {
// FIXME an arbitrary choice
activeTrack = mActiveTracks[0];
acquireWakeLock_l(activeTrack->uid());
if (size > 1) {
SortedVector<int> tmp;
for (size_t i = 0; i < size; i++) {
tmp.add(mActiveTracks[i]->uid());
}
updateWakeLockUids_l(tmp);
}
} else {
acquireWakeLock_l(-1);
}
}
// used to request a deferred sleep, to be executed later while mutex is unlocked
uint32_t sleepUs = 0;
// loop while there is work to do
for (;;) {
Vector< sp<EffectChain> > effectChains;
// sleep with mutex unlocked
if (sleepUs > 0) {
usleep(sleepUs);
sleepUs = 0;
}
// activeTracks accumulates a copy of a subset of mActiveTracks
Vector< sp<RecordTrack> > activeTracks;
// reference to the (first and only) active fast track
sp<RecordTrack> fastTrack;
// reference to a fast track which is about to be removed
sp<RecordTrack> fastTrackToRemove;
{ // scope for mLock
Mutex::Autolock _l(mLock);
processConfigEvents_l();
// check exitPending here because checkForNewParameters_l() and
// checkForNewParameters_l() can temporarily release mLock
if (exitPending()) {
break;
}
// if no active track(s), then standby and release wakelock
size_t size = mActiveTracks.size();
if (size == 0) {
standbyIfNotAlreadyInStandby();
// exitPending() can't become true here
releaseWakeLock_l();
ALOGV("RecordThread: loop stopping");
// go to sleep
mWaitWorkCV.wait(mLock);
ALOGV("RecordThread: loop starting");
goto reacquire_wakelock;
}
if (mActiveTracksGen != activeTracksGen) {
activeTracksGen = mActiveTracksGen;
SortedVector<int> tmp;
for (size_t i = 0; i < size; i++) {
tmp.add(mActiveTracks[i]->uid());
}
updateWakeLockUids_l(tmp);
}
bool doBroadcast = false;
for (size_t i = 0; i < size; ) {
activeTrack = mActiveTracks[i];
if (activeTrack->isTerminated()) {
if (activeTrack->isFastTrack()) {
ALOG_ASSERT(fastTrackToRemove == 0);
fastTrackToRemove = activeTrack;
}
removeTrack_l(activeTrack);
mActiveTracks.remove(activeTrack);
mActiveTracksGen++;
size--;
continue;
}
TrackBase::track_state activeTrackState = activeTrack->mState;
switch (activeTrackState) {
case TrackBase::PAUSING:
mActiveTracks.remove(activeTrack);
mActiveTracksGen++;
doBroadcast = true;
size--;
continue;
case TrackBase::STARTING_1:
sleepUs = 10000;
i++;
continue;
case TrackBase::STARTING_2:
doBroadcast = true;
mStandby = false;
activeTrack->mState = TrackBase::ACTIVE;
break;
case TrackBase::ACTIVE:
break;
case TrackBase::IDLE:
i++;
continue;
default:
LOG_ALWAYS_FATAL("Unexpected activeTrackState %d", activeTrackState);
}
activeTracks.add(activeTrack);
i++;
if (activeTrack->isFastTrack()) {
ALOG_ASSERT(!mFastTrackAvail);
ALOG_ASSERT(fastTrack == 0);
fastTrack = activeTrack;
}
}
if (doBroadcast) {
mStartStopCond.broadcast();
}
// sleep if there are no active tracks to process
if (activeTracks.size() == 0) {
if (sleepUs == 0) {
sleepUs = kRecordThreadSleepUs;
}
continue;
}
sleepUs = 0;
lockEffectChains_l(effectChains);
}
// thread mutex is now unlocked, mActiveTracks unknown, activeTracks.size() > 0
size_t size = effectChains.size();
for (size_t i = 0; i < size; i++) {
// thread mutex is not locked, but effect chain is locked
effectChains[i]->process_l();
}
// Push a new fast capture state if fast capture is not already running, or cblk change
if (mFastCapture != 0) {
FastCaptureStateQueue *sq = mFastCapture->sq();
FastCaptureState *state = sq->begin();
bool didModify = false;
FastCaptureStateQueue::block_t block = FastCaptureStateQueue::BLOCK_UNTIL_PUSHED;
if (state->mCommand != FastCaptureState::READ_WRITE /* FIXME &&
(kUseFastMixer != FastMixer_Dynamic || state->mTrackMask > 1)*/) {
if (state->mCommand == FastCaptureState::COLD_IDLE) {
int32_t old = android_atomic_inc(&mFastCaptureFutex);
if (old == -1) {
(void) syscall(__NR_futex, &mFastCaptureFutex, FUTEX_WAKE_PRIVATE, 1);
}
}
state->mCommand = FastCaptureState::READ_WRITE;
#if 0 // FIXME
mFastCaptureDumpState.increaseSamplingN(mAudioFlinger->isLowRamDevice() ?
FastCaptureDumpState::kSamplingNforLowRamDevice : FastMixerDumpState::kSamplingN);
#endif
didModify = true;
}
audio_track_cblk_t *cblkOld = state->mCblk;
audio_track_cblk_t *cblkNew = fastTrack != 0 ? fastTrack->cblk() : NULL;
if (cblkNew != cblkOld) {
state->mCblk = cblkNew;
// block until acked if removing a fast track
if (cblkOld != NULL) {
block = FastCaptureStateQueue::BLOCK_UNTIL_ACKED;
}
didModify = true;
}
sq->end(didModify);
if (didModify) {
sq->push(block);
#if 0
if (kUseFastCapture == FastCapture_Dynamic) {
mNormalSource = mPipeSource;
}
#endif
}
}
// now run the fast track destructor with thread mutex unlocked
fastTrackToRemove.clear();
// Read from HAL to keep up with fastest client if multiple active tracks, not slowest one.
// Only the client(s) that are too slow will overrun. But if even the fastest client is too
// slow, then this RecordThread will overrun by not calling HAL read often enough.
// If destination is non-contiguous, first read past the nominal end of buffer, then
// copy to the right place. Permitted because mRsmpInBuffer was over-allocated.
int32_t rear = mRsmpInRear & (mRsmpInFramesP2 - 1);
ssize_t framesRead;
// If an NBAIO source is present, use it to read the normal capture's data
if (mPipeSource != 0) {
size_t framesToRead = mBufferSize / mFrameSize;
framesRead = mPipeSource->read(&mRsmpInBuffer[rear * mChannelCount],
framesToRead, AudioBufferProvider::kInvalidPTS);
if (framesRead == 0) {
// since pipe is non-blocking, simulate blocking input
sleepUs = (framesToRead * 1000000LL) / mSampleRate;
}
// otherwise use the HAL / AudioStreamIn directly
} else {
ssize_t bytesRead = mInput->stream->read(mInput->stream,
&mRsmpInBuffer[rear * mChannelCount], mBufferSize);
if (bytesRead < 0) {
framesRead = bytesRead;
} else {
framesRead = bytesRead / mFrameSize;
}
}
if (framesRead < 0 || (framesRead == 0 && mPipeSource == 0)) {
ALOGE("read failed: framesRead=%d", framesRead);
// Force input into standby so that it tries to recover at next read attempt
inputStandBy();
sleepUs = kRecordThreadSleepUs;
}
if (framesRead <= 0) {
goto unlock;
}
ALOG_ASSERT(framesRead > 0);
if (mTeeSink != 0) {
(void) mTeeSink->write(&mRsmpInBuffer[rear * mChannelCount], framesRead);
}
// If destination is non-contiguous, we now correct for reading past end of buffer.
{
size_t part1 = mRsmpInFramesP2 - rear;
if ((size_t) framesRead > part1) {
memcpy(mRsmpInBuffer, &mRsmpInBuffer[mRsmpInFramesP2 * mChannelCount],
(framesRead - part1) * mFrameSize);
}
}
rear = mRsmpInRear += framesRead;
size = activeTracks.size();
// loop over each active track
for (size_t i = 0; i < size; i++) {
activeTrack = activeTracks[i];
// skip fast tracks, as those are handled directly by FastCapture
if (activeTrack->isFastTrack()) {
continue;
}
enum {
OVERRUN_UNKNOWN,
OVERRUN_TRUE,
OVERRUN_FALSE
} overrun = OVERRUN_UNKNOWN;
// loop over getNextBuffer to handle circular sink
for (;;) {
activeTrack->mSink.frameCount = ~0;
status_t status = activeTrack->getNextBuffer(&activeTrack->mSink);
size_t framesOut = activeTrack->mSink.frameCount;
LOG_ALWAYS_FATAL_IF((status == OK) != (framesOut > 0));
int32_t front = activeTrack->mRsmpInFront;
ssize_t filled = rear - front;
size_t framesIn;
if (filled < 0) {
// should not happen, but treat like a massive overrun and re-sync
framesIn = 0;
activeTrack->mRsmpInFront = rear;
overrun = OVERRUN_TRUE;
} else if ((size_t) filled <= mRsmpInFrames) {
framesIn = (size_t) filled;
} else {
// client is not keeping up with server, but give it latest data
framesIn = mRsmpInFrames;
activeTrack->mRsmpInFront = front = rear - framesIn;
overrun = OVERRUN_TRUE;
}
if (framesOut == 0 || framesIn == 0) {
break;
}
if (activeTrack->mResampler == NULL) {
// no resampling
if (framesIn > framesOut) {
framesIn = framesOut;
} else {
framesOut = framesIn;
}
int8_t *dst = activeTrack->mSink.i8;
while (framesIn > 0) {
front &= mRsmpInFramesP2 - 1;
size_t part1 = mRsmpInFramesP2 - front;
if (part1 > framesIn) {
part1 = framesIn;
}
int8_t *src = (int8_t *)mRsmpInBuffer + (front * mFrameSize);
if (mChannelCount == activeTrack->mChannelCount) {
memcpy(dst, src, part1 * mFrameSize);
} else if (mChannelCount == 1) {
upmix_to_stereo_i16_from_mono_i16((int16_t *)dst, (const int16_t *)src,
part1);
} else {
downmix_to_mono_i16_from_stereo_i16((int16_t *)dst, (const int16_t *)src,
part1);
}
dst += part1 * activeTrack->mFrameSize;
front += part1;
framesIn -= part1;
}
activeTrack->mRsmpInFront += framesOut;
} else {
// resampling
// FIXME framesInNeeded should really be part of resampler API, and should
// depend on the SRC ratio
// to keep mRsmpInBuffer full so resampler always has sufficient input
size_t framesInNeeded;
// FIXME only re-calculate when it changes, and optimize for common ratios
// Do not precompute in/out because floating point is not associative
// e.g. a*b/c != a*(b/c).
const double in(mSampleRate);
const double out(activeTrack->mSampleRate);
framesInNeeded = ceil(framesOut * in / out) + 1;
ALOGV("need %u frames in to produce %u out given in/out ratio of %.4g",
framesInNeeded, framesOut, in / out);
// Although we theoretically have framesIn in circular buffer, some of those are
// unreleased frames, and thus must be discounted for purpose of budgeting.
size_t unreleased = activeTrack->mRsmpInUnrel;
framesIn = framesIn > unreleased ? framesIn - unreleased : 0;
if (framesIn < framesInNeeded) {
ALOGV("not enough to resample: have %u frames in but need %u in to "
"produce %u out given in/out ratio of %.4g",
framesIn, framesInNeeded, framesOut, in / out);
size_t newFramesOut = framesIn > 0 ? floor((framesIn - 1) * out / in) : 0;
LOG_ALWAYS_FATAL_IF(newFramesOut >= framesOut);
if (newFramesOut == 0) {
break;
}
framesInNeeded = ceil(newFramesOut * in / out) + 1;
ALOGV("now need %u frames in to produce %u out given out/in ratio of %.4g",
framesInNeeded, newFramesOut, out / in);
LOG_ALWAYS_FATAL_IF(framesIn < framesInNeeded);
ALOGV("success 2: have %u frames in and need %u in to produce %u out "
"given in/out ratio of %.4g",
framesIn, framesInNeeded, newFramesOut, in / out);
framesOut = newFramesOut;
} else {
ALOGV("success 1: have %u in and need %u in to produce %u out "
"given in/out ratio of %.4g",
framesIn, framesInNeeded, framesOut, in / out);
}
// reallocate mRsmpOutBuffer as needed; we will grow but never shrink
if (activeTrack->mRsmpOutFrameCount < framesOut) {
// FIXME why does each track need it's own mRsmpOutBuffer? can't they share?
delete[] activeTrack->mRsmpOutBuffer;
// resampler always outputs stereo
activeTrack->mRsmpOutBuffer = new int32_t[framesOut * FCC_2];
activeTrack->mRsmpOutFrameCount = framesOut;
}
// resampler accumulates, but we only have one source track
memset(activeTrack->mRsmpOutBuffer, 0, framesOut * FCC_2 * sizeof(int32_t));
activeTrack->mResampler->resample(activeTrack->mRsmpOutBuffer, framesOut,
// FIXME how about having activeTrack implement this interface itself?
activeTrack->mResamplerBufferProvider
/*this*/ /* AudioBufferProvider* */);
// ditherAndClamp() works as long as all buffers returned by
// activeTrack->getNextBuffer() are 32 bit aligned which should be always true.
if (activeTrack->mChannelCount == 1) {
// temporarily type pun mRsmpOutBuffer from Q4.27 to int16_t
ditherAndClamp(activeTrack->mRsmpOutBuffer, activeTrack->mRsmpOutBuffer,
framesOut);
// the resampler always outputs stereo samples:
// do post stereo to mono conversion
downmix_to_mono_i16_from_stereo_i16(activeTrack->mSink.i16,
(const int16_t *)activeTrack->mRsmpOutBuffer, framesOut);
} else {
ditherAndClamp((int32_t *)activeTrack->mSink.raw,
activeTrack->mRsmpOutBuffer, framesOut);
}
// now done with mRsmpOutBuffer
}
if (framesOut > 0 && (overrun == OVERRUN_UNKNOWN)) {
overrun = OVERRUN_FALSE;
}
if (activeTrack->mFramesToDrop == 0) {
if (framesOut > 0) {
activeTrack->mSink.frameCount = framesOut;
activeTrack->releaseBuffer(&activeTrack->mSink);
}
} else {
// FIXME could do a partial drop of framesOut
if (activeTrack->mFramesToDrop > 0) {
activeTrack->mFramesToDrop -= framesOut;
if (activeTrack->mFramesToDrop <= 0) {
activeTrack->clearSyncStartEvent();
}
} else {
activeTrack->mFramesToDrop += framesOut;
if (activeTrack->mFramesToDrop >= 0 || activeTrack->mSyncStartEvent == 0 ||
activeTrack->mSyncStartEvent->isCancelled()) {
ALOGW("Synced record %s, session %d, trigger session %d",
(activeTrack->mFramesToDrop >= 0) ? "timed out" : "cancelled",
activeTrack->sessionId(),
(activeTrack->mSyncStartEvent != 0) ?
activeTrack->mSyncStartEvent->triggerSession() : 0);
activeTrack->clearSyncStartEvent();
}
}
}
if (framesOut == 0) {
break;
}
}
ALOGE("pngcui - end for(;;)");
switch (overrun) {
case OVERRUN_TRUE:
// client isn't retrieving buffers fast enough
if (!activeTrack->setOverflow()) {
nsecs_t now = systemTime();
// FIXME should lastWarning per track?
if ((now - lastWarning) > kWarningThrottleNs) {
ALOGW("RecordThread: buffer overflow");
lastWarning = now;
}
}
break;
case OVERRUN_FALSE:
activeTrack->clearOverflow();
break;
case OVERRUN_UNKNOWN:
break;
}
}
unlock:
// enable changes in effect chain
unlockEffectChains(effectChains);
// effectChains doesn't need to be cleared, since it is cleared by destructor at scope end
}
standbyIfNotAlreadyInStandby();
{
Mutex::Autolock _l(mLock);
for (size_t i = 0; i < mTracks.size(); i++) {
sp<RecordTrack> track = mTracks[i];
track->invalidate();
}
mActiveTracks.clear();
mActiveTracksGen++;
mStartStopCond.broadcast();
}
releaseWakeLock();
ALOGV("RecordThread %p exiting", this);
return false;
}
The main work done in this thread is as follows:
The thread itself was created back during AudioRecord::set() (see the previous article); until now it has been blocked in mWaitWorkCV.wait(mLock), waiting for a signal on mWaitWorkCV, after which it jumps back to the reacquire_wakelock label;
1. Note that when the thread starts it calls inputStandBy(), which ends up in mInput->stream->common.standby; but since in->standby is initially 1, the HAL does no real work at this point;
2. processConfigEvents_l() is called to check whether mConfigEvents holds any events, and to dispatch them if so;
3. The size of mActiveTracks is read. Recall that when AudioFlinger::RecordThread::start() was called in Threads.cpp, the RecordTrack created earlier was added to mActiveTracks, so the activeTrack here is that RecordTrack object;
4. Since mActiveTracks is no longer empty, the for loop no longer parks in mWaitWorkCV.wait(), and the real work begins;
5. The activeTrack that RecordThread::start() added is fetched from mActiveTracks;
6. Remember the open question left in RecordThread::start()? Here is the answer: the track was added to mActiveTracks with mState == STARTING_1, so this branch is taken, sleepUs is set to 10 ms, and the loop continues; back at the top of the for loop, the thread usleep()s! As analyzed before, mState is only advanced to STARTING_2 once RecordThread::start() has run to completion, so RecordThread::threadLoop() is effectively waiting for start() to finish, sleeping until it does;
7. doBroadcast is set to true, mStandby to false, and mState becomes ACTIVE;
8. The activeTrack object is copied into the local activeTracks vector and mStartStopCond.broadcast() is called; keep this broadcast in mind, it is picked up later;
9. If any effect chains are attached, each one gets a process_l() pass;
10. The current write position rear within the capture ring buffer is computed; here mRsmpInFrames is mFrameCount * 7, i.e. 1024*7, and mRsmpInFramesP2 is roundup(mRsmpInFrames), i.e. 1024*8;
11. In the FastCapture case, PipeSource->read() is used to fetch data; otherwise the HAL interface mInput->stream->read() is called directly and the data is stored into mRsmpInBuffer. The latter path is taken here, reading 2048 bytes per iteration;
12. If the read fails, the HAL's standby state is forced back to 1, the thread sleeps for a while, and the read is retried; the details live in the HAL's read function;
13. part1, the space left up to the nominal end of the buffer, is computed; if the data just read does not fit there, the excess written past the end is copied back to the head of the ring buffer, correcting for the over-read (steps 10, 13 and 14 are illustrated by the sketch after this list);
14. rear and mRsmpInRear are advanced by framesRead, where framesRead is bytesRead / mFrameSize, and mFrameSize comes from audio_stream_in_frame_size (obtained in AudioFlinger::RecordThread::readInputParameters_l());
15. activeTrack->getNextBuffer() is called to obtain the next buffer;
16. filled, the amount of data currently queued for this track, is computed; if the buffer has been overwritten past the track's read position, the state is marked OVERRUN_TRUE, and if framesOut or framesIn is 0 (framesOut is the number of available frames in the sink buffer, so on overrun it is 0), the loop breaks out right away;
17. Then it is decided whether resampling is needed (not needed in our case):
1. Without resampling, memcpy copies the data from mRsmpInBuffer straight into activeTrack->mSink.i8, i.e. the buffer just obtained via getNextBuffer;
2. With resampling, activeTrack->mResampler->resample() is invoked first, and ditherAndClamp() then moves the data into activeTrack->mSink;
18. Finally the overrun state is checked: on OVERRUN, setOverflow() sets mOverflow to true, meaning the application is not pulling data fast enough; the thread still keeps providing the latest PCM data, so if the captured audio ends up with skips or glitches, this is a good place to start investigating.
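To make the ring-buffer bookkeeping of steps 10, 13 and 14 concrete, here is a minimal, self-contained sketch of the same index math; the names mirror the ones above, but the tiny buffer sizes and the fake "HAL" data are illustrative assumptions, not the real capture path:
#include <cstring>
#include <cstdint>
#include <cstdio>
// Minimal model of RecordThread's capture ring buffer.
// mRsmpInFramesP2 is a power of two, and the buffer is over-allocated so a
// read may run past the nominal end and be folded back to the head afterwards.
int main() {
    const size_t framesP2 = 8;           // stands in for mRsmpInFramesP2 (really 8192)
    const size_t overAlloc = 4;          // extra frames past the nominal end
    int16_t buf[framesP2 + overAlloc];   // stands in for mRsmpInBuffer (mono, 16-bit)
    int32_t rsmpInRear = 6;              // absolute frame counter, like mRsmpInRear
    // Step 10: fold the absolute rear into a buffer index with a cheap mask.
    int32_t rear = rsmpInRear & (framesP2 - 1);      // 6
    // Pretend the "HAL" hands us 4 frames starting at index 'rear';
    // 2 of them land past the nominal end (indices 8 and 9).
    const size_t framesRead = 4;
    for (size_t i = 0; i < framesRead; i++) buf[rear + i] = (int16_t)(100 + i);
    // Step 13: copy anything written past the nominal end back to the head.
    size_t part1 = framesP2 - rear;                  // contiguous room: 2 frames
    if (framesRead > part1) {
        memcpy(buf, &buf[framesP2], (framesRead - part1) * sizeof(int16_t));
    }
    // Step 14: advance the absolute counter; readers recompute their own index.
    rsmpInRear += framesRead;
    printf("head now holds %d,%d; rear counter=%d\n", buf[0], buf[1], rsmpInRear);
    return 0;
}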
這里分析下第11步:mInput->stream->read以及第15步:activeTrack->getNextBuffer
首先分析下第15步:activeTrack->getNextBuffer,獲取下一個buffer,因為我們之前就了解AudioBuffer的管理方式,他有一個環形緩沖區,現在這里就是一直在讀取底層的數據,他不會在乎應用層有沒有去獲取我read出來的數據,所以這里就有一個問題,RecordThread線程read出來的數據是怎么寫到緩沖區的呢,和后面的AudioRecord.java中read函數去進行交互的。
frameworks\av\services\audioflinger\Tracks.cpp
status_t AudioFlinger::RecordThread::RecordTrack::getNextBuffer(AudioBufferProvider::Buffer* buffer,
int64_t pts __unused)
{
ServerProxy::Buffer buf;
buf.mFrameCount = buffer->frameCount;
status_t status = mServerProxy->obtainBuffer(&buf);
buffer->frameCount = buf.mFrameCount;
buffer->raw = buf.mRaw;
if (buf.mFrameCount == 0) {
// FIXME also wake futex so that overrun is noticed more quickly
(void) android_atomic_or(CBLK_OVERRUN, &mCblk->mFlags);
}
return status;
}
It goes on to call mServerProxy->obtainBuffer() to get buf; raw points into the shared memory backing the ring buffer. If the buffer has filled up, mCblk->mFlags is set to CBLK_OVERRUN.
frameworks\av\media\libmedia\AudioTrackShared.cpp
status_t ServerProxy::obtainBuffer(Buffer* buffer, bool ackFlush)
{
LOG_ALWAYS_FATAL_IF(buffer == NULL || buffer->mFrameCount == 0);
if (mIsShutdown) {
goto no_init;
}
{
audio_track_cblk_t* cblk = mCblk;
// compute number of frames available to write (AudioTrack) or read (AudioRecord),
// or use previous cached value from framesReady(), with added barrier if it omits.
int32_t front;
int32_t rear;
// See notes on barriers at ClientProxy::obtainBuffer()
if (mIsOut) {
int32_t flush = cblk->u.mStreaming.mFlush;
rear = android_atomic_acquire_load(&cblk->u.mStreaming.mRear);
front = cblk->u.mStreaming.mFront;
if (flush != mFlush) {
// effectively obtain then release whatever is in the buffer
size_t mask = (mFrameCountP2 << 1) - 1;
int32_t newFront = (front & ~mask) | (flush & mask);
ssize_t filled = rear - newFront;
// Rather than shutting down on a corrupt flush, just treat it as a full flush
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
ALOGE("mFlush %#x -> %#x, front %#x, rear %#x, mask %#x, newFront %#x, filled %d=%#x",
mFlush, flush, front, rear, mask, newFront, filled, filled);
newFront = rear;
}
mFlush = flush;
android_atomic_release_store(newFront, &cblk->u.mStreaming.mFront);
// There is no danger from a false positive, so err on the side of caution
if (true /*front != newFront*/) {
int32_t old = android_atomic_or(CBLK_FUTEX_WAKE, &cblk->mFutex);
if (!(old & CBLK_FUTEX_WAKE)) {
(void) syscall(__NR_futex, &cblk->mFutex,
mClientInServer ? FUTEX_WAKE_PRIVATE : FUTEX_WAKE, 1);
}
}
front = newFront;
}
} else {
front = android_atomic_acquire_load(&cblk->u.mStreaming.mFront);
rear = cblk->u.mStreaming.mRear;
}
ssize_t filled = rear - front;
// pipe should not already be overfull
if (!(0 <= filled && (size_t) filled <= mFrameCount)) {
ALOGE("Shared memory control block is corrupt (filled=%zd); shutting down", filled);
mIsShutdown = true;
}
if (mIsShutdown) {
goto no_init;
}
// don't allow filling pipe beyond the nominal size
size_t availToServer;
if (mIsOut) {
availToServer = filled;
mAvailToClient = mFrameCount - filled;
} else {
availToServer = mFrameCount - filled;
mAvailToClient = filled;
}
// 'availToServer' may be non-contiguous, so return only the first contiguous chunk
size_t part1;
if (mIsOut) {
front &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - front;
} else {
rear &= mFrameCountP2 - 1;
part1 = mFrameCountP2 - rear;
}
if (part1 > availToServer) {
part1 = availToServer;
}
size_t ask = buffer->mFrameCount;
if (part1 > ask) {
part1 = ask;
}
// is assignment redundant in some cases?
buffer->mFrameCount = part1;
buffer->mRaw = part1 > 0 ?
&((char *) mBuffers)[(mIsOut ? front : rear) * mFrameSize] : NULL;
buffer->mNonContig = availToServer - part1;
// After flush(), allow releaseBuffer() on a previously obtained buffer;
// see "Acknowledge any pending flush()" in audioflinger/Tracks.cpp.
if (!ackFlush) {
mUnreleased = part1;
}
return part1 > 0 ? NO_ERROR : WOULD_BLOCK;
}
no_init:
buffer->mFrameCount = 0;
buffer->mRaw = NULL;
buffer->mNonContig = 0;
mUnreleased = 0;
return NO_INIT;
}
The main work in this function is as follows:
1. Fetch mCblk. Recall that mCblk was set up in AudioRecord::openRecord_l(): sp<IMemory> iMem was obtained from audioFlinger->openRecord() and mCblk = iMem->pointer(); on the server side, AudioFlinger::openRecord() obtained it via iMem = cblk = recordTrack->getCblk();
frameworks\av\services\audioflinger\TrackBase.h
sp<IMemory> getCblk() const { return mCblkMemory; }
audio_track_cblk_t* cblk() const { return mCblk; }
sp<IMemory> getBuffers() const { return mBufferMemory; }
2. Read front and rear out of mCblk, then compute filled;
3. Compute the number of available frames in the buffer and store it in mFrameCount;
4. Compute the address of the next chunk of buffer and store it in mRaw (a worked example of this index arithmetic follows below);
With that, the data read from the HAL ends up written into the corresponding shared memory, i.e. the ring buffer.
然后再繼續簡單分析下第12步:調用HAL層的read函數
hardware\aw\audio\tulip\audio_hw.c
static ssize_t in_read(struct audio_stream_in *stream, void* buffer,
size_t bytes)
{
int ret = 0;
struct sunxi_stream_in *in = (struct sunxi_stream_in *)stream;
struct sunxi_audio_device *adev = in->dev;
size_t frames_rq = bytes / audio_stream_frame_size(&stream->common);
if (adev->mode == AUDIO_MODE_IN_CALL) {
memset(buffer, 0, bytes);
}
/* acquiring hw device mutex systematically is useful if a low priority thread is waiting
* on the input stream mutex - e.g. executing select_mode() while holding the hw device
* mutex
*/
if (adev->af_capture_flag && adev->PcmManager.BufExist) {
pthread_mutex_lock(&adev->lock);
pthread_mutex_lock(&in->lock);
if (in->standby) {
in->standby = 0;
}
pthread_mutex_unlock(&adev->lock);
if (ret < 0)
goto exit;
ret = ReadPcmData(buffer, bytes, &adev->PcmManager);
if (ret > 0)
ret = 0;
if (ret == 0 && adev->mic_mute)
memset(buffer, 0, bytes);
pthread_mutex_unlock(&in->lock);
return bytes;
}
pthread_mutex_lock(&adev->lock);
pthread_mutex_lock(&in->lock);
if (in->standby) {
ret = start_input_stream(in);
if (ret == 0)
in->standby = 0;
}
pthread_mutex_unlock(&adev->lock);
if (ret < 0)
goto exit;
if (in->num_preprocessors != 0) {
ret = read_frames(in, buffer, frames_rq);
} else if (in->resampler != NULL) {
ret = read_frames(in, buffer, frames_rq);
} else {
ret = pcm_read(in->pcm, buffer, bytes);
}
if (ret > 0)
ret = 0;
if (ret == 0 && adev->mic_mute)
memset(buffer, 0, bytes);
exit:
if (ret < 0)
usleep(bytes * 1000000 / audio_stream_frame_size(&stream->common) /
in_get_sample_rate(&stream->common));
pthread_mutex_unlock(&in->lock);
return bytes;
}
這里就直接調用到了HAL層中的in_read函數,這個函數一般soc廠家不一樣,實現也不一樣,這里做簡要介紹
1.如果當前輸入流的standby為true的時候,也就是第一次read時,調用start_input_stream函數去打開mic設備節點;
2.如果stream_in的num_preprocessors或者resampler有數據的時候,則調用read_frames函數獲取數據,否則直接調用pcm_read獲取。其實他們最終都是調用的pcm_read去獲取數據的,這個函數是tinyalsa架構提供的,這個庫的實現源碼位置:external\tinyalsa\,我們在測試音頻的時候一般也是通過這幾個程序去測試的;
3.把讀取到的數據喂給buffer,最后休息一下;
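For reference, a stand-alone capture loop on tinyalsa could look like the sketch below, in the spirit of external/tinyalsa/tinycap.c; the card/device numbers and the pcm_config values are assumptions for illustration, since the real ones depend on the SoC (cf. PORT_CODEC above):
#include <tinyalsa/asoundlib.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
// Open a capture PCM, read one buffer's worth of data, and close it.
int main() {
    struct pcm_config config;
    memset(&config, 0, sizeof(config));
    config.channels     = 2;
    config.rate         = 44100;
    config.period_size  = 1024;
    config.period_count = 4;
    config.format       = PCM_FORMAT_S16_LE;
    struct pcm *pcm = pcm_open(0 /*card*/, 0 /*device*/, PCM_IN, &config);
    if (!pcm_is_ready(pcm)) {
        fprintf(stderr, "cannot open pcm_in: %s\n", pcm_get_error(pcm));
        return 1;
    }
    unsigned int bytes = pcm_frames_to_bytes(pcm, pcm_get_buffer_size(pcm));
    char *buffer = (char *)malloc(bytes);
    // pcm_read() blocks until 'bytes' bytes have been captured; 0 means success.
    if (buffer != NULL && pcm_read(pcm, buffer, bytes) == 0)
        printf("captured %u bytes\n", bytes);
    free(buffer);
    pcm_close(pcm);
    return 0;
}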
這里再繼續分析下start_input_stream函數
static int start_input_stream(struct sunxi_stream_in *in)
{
int ret = 0;
int in_ajust_rate = 0;
struct sunxi_audio_device *adev = in->dev;
adev->active_input = in;
F_LOG;
adev->in_device = in->device;
select_device(adev);
if (in->need_echo_reference && in->echo_reference == NULL)
in->echo_reference = get_echo_reference(adev,
AUDIO_FORMAT_PCM_16_BIT,
in->config.channels,
in->requested_rate);
in_ajust_rate = in->requested_rate;
ALOGD(">>>>>> in_ajust_rate is : %d", in_ajust_rate);
// out/in stream should be both 44.1K serial
switch(CASE_NAME){
case 0 :
case 1 :
in_ajust_rate = SAMPLING_RATE_44K;
if((adev->mode == AUDIO_MODE_IN_CALL) && (adev->out_device == AUDIO_DEVICE_OUT_BLUETOOTH_SCO) ){
in_ajust_rate = SAMPLING_RATE_8K;
}
if((adev->mode == AUDIO_MODE_IN_COMMUNICATION) && (adev->out_device == AUDIO_DEVICE_OUT_BLUETOOTH_SCO_HEADSET) ){
in_ajust_rate = SAMPLING_RATE_8K;
}
break;
case 2 :
if(adev->mode == AUDIO_MODE_IN_CALL)
in_ajust_rate = in->requested_rate;
else
in_ajust_rate = SAMPLING_RATE_44K;
default :
break;
}
if (adev->mode == AUDIO_MODE_IN_CALL)
in->pcm = pcm_open(0, PORT_VIR_CODEC, PCM_IN, &in->config);
else
in->pcm = pcm_open(0, PORT_CODEC, PCM_IN, &in->config);
if (!pcm_is_ready(in->pcm)) {
ALOGE("cannot open pcm_in driver: %s", pcm_get_error(in->pcm));
pcm_close(in->pcm);
adev->active_input = NULL;
return -ENOMEM;
}
if (in->requested_rate != in->config.rate) {
in->buf_provider.get_next_buffer = get_next_buffer;
in->buf_provider.release_buffer = release_buffer;
ret = create_resampler(in->config.rate,
in->requested_rate,
in->config.channels,
RESAMPLER_QUALITY_DEFAULT,
&in->buf_provider,
&in->resampler);
if (ret != 0) {
ALOGE("create in resampler failed, %d -> %d", in->config.rate, in->requested_rate);
ret = -EINVAL;
goto err;
}
ALOGV("create in resampler OK, %d -> %d", in->config.rate, in->requested_rate);
}
else
{
ALOGV("do not use in resampler");
}
/* if no supported sample rate is available, use the resampler */
if (in->resampler) {
in->resampler->reset(in->resampler);
in->frames_in = 0;
}
PLOGV("audio_hw::read end!!!");
return 0;
err:
if (in->resampler) {
release_resampler(in->resampler);
}
return -1;
}
The main work of this function:
1. Call select_device() to choose an input device (i.e. apply the corresponding mixer route);
2. Call pcm_open() to open the input device node;
3. If the requested sample rate differs from the hardware rate, build a resampler with create_resampler() (a usage sketch follows below);
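On the create_resampler() piece: it comes from AOSP's audio_utils resampler (system/media/audio_utils). Below is a minimal sketch of how it can be driven; unlike the HAL, which registers get_next_buffer/release_buffer callbacks in in->buf_provider, this sketch passes a NULL provider and feeds input explicitly through resample_from_input(). The rates and frame counts are illustrative assumptions:
#include <audio_utils/resampler.h>
#include <stdint.h>
#include <stdio.h>
int main() {
    struct resampler_itfe *rs = NULL;
    // 48 kHz mono in, 44.1 kHz out, default quality, no buffer provider.
    if (create_resampler(48000, 44100, 1, RESAMPLER_QUALITY_DEFAULT,
                         NULL /*provider*/, &rs) != 0) {
        fprintf(stderr, "create_resampler failed\n");
        return 1;
    }
    int16_t in[480] = {0};              // 10 ms of 48 kHz mono input
    int16_t out[512];
    size_t inFrames = 480, outFrames = 512;
    // Consumes up to inFrames and produces up to outFrames; both are updated in place.
    rs->resample_from_input(rs, in, &inFrames, out, &outFrames);
    printf("consumed %zu in-frames, produced %zu out-frames\n", inFrames, outFrames);
    release_resampler(rs);
    return 0;
}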
Let us then look at the select_device() function
static void select_device(struct sunxi_audio_device *adev)
{
int ret = -1;
int output_device_id = 0;
int input_device_id = 0;
const char *output_route = NULL;
const char *input_route = NULL;
const char *phone_route = NULL;
int earpiece_on=0, headset_on=0, headphone_on=0, bt_on=0, speaker_on=0;
int main_mic_on = 0,sub_mic_on = 0;
int bton_temp = 0;
if(!adev->ar)
return;
audio_route_reset(adev->ar);
audio_route_update_mixer_old_value(adev->ar);
if(spk_dul_used)
audio_route_apply_path(adev->ar, "media-speaker-off");
else
audio_route_apply_path(adev->ar, "media-single-speaker-off");
if (adev->mode == AUDIO_MODE_IN_CALL){
if(CASE_NAME <= 0){
ALOGV("%s,PHONE CASE ERR!!!!!!!!!!!!!!!!!!!! line: %d,CASE_NAME:%d", __FUNCTION__, __LINE__,CASE_NAME);
//return CASE_NAME;
}
headset_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADSET; // hp4p
headphone_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADPHONE; // hp3p
speaker_on = adev->out_device & AUDIO_DEVICE_OUT_SPEAKER;
earpiece_on = adev->out_device & AUDIO_DEVICE_OUT_EARPIECE;
bt_on = adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO;
//audio_route_reset(adev->ar);
ALOGV("****LINE:%d,FUNC:%s, headset_on:%d, headphone_on:%d, speaker_on:%d, earpiece_on:%d, bt_on:%d",__LINE__,__FUNCTION__, headphone_on, headphone_on, speaker_on, earpiece_on, bt_on);
if (last_call_path_is_bt && !bt_on) {
end_bt_call(adev);
last_call_path_is_bt = 0;
}
if ((headset_on || headphone_on) && speaker_on){
output_device_id = OUT_DEVICE_SPEAKER_AND_HEADSET;
} else if (earpiece_on) {
F_LOG;
if (NO_EARPIECE)
{
F_LOG;
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
else
{F_LOG;
output_device_id = OUT_DEVICE_EARPIECE;
}
} else if (headset_on) {
output_device_id = OUT_DEVICE_HEADSET;
} else if (headphone_on){
output_device_id = OUT_DEVICE_HEADPHONES;
}else if(bt_on){
bton_temp = 1;
//bt_start_call(adev);
//last_call_path_is_bt = 1;
output_device_id = OUT_DEVICE_BT_SCO;
}else if(speaker_on){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
ALOGV("****** output_id is : %d", output_device_id);
phone_route = phone_route_configs[CASE_NAME-1][output_device_id];
set_incall_device(adev);
}
if (adev->active_output) {
ALOGV("active_output, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
headset_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADSET; // hp4p
headphone_on = adev->out_device & AUDIO_DEVICE_OUT_WIRED_HEADPHONE; // hp3p
speaker_on = adev->out_device & AUDIO_DEVICE_OUT_SPEAKER;
earpiece_on = adev->out_device & AUDIO_DEVICE_OUT_EARPIECE;
bt_on = adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO;
//audio_route_reset(adev->ar);
ALOGV("****LINE:%d,FUNC:%s, headset_on:%d, headphone_on:%d, speaker_on:%d, earpiece_on:%d, bt_on:%d",__LINE__,__FUNCTION__, headset_on, headphone_on, speaker_on, earpiece_on, bt_on);
if ((headset_on || headphone_on) && speaker_on){
output_device_id = OUT_DEVICE_SPEAKER_AND_HEADSET;
} else if (earpiece_on) {
if (NO_EARPIECE){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
else
output_device_id = OUT_DEVICE_EARPIECE;
//output_device_id = OUT_DEVICE_EARPIECE;
} else if (headset_on) {
output_device_id = OUT_DEVICE_HEADSET;
} else if (headphone_on){
output_device_id = OUT_DEVICE_HEADSET;
}else if(bt_on){
output_device_id = OUT_DEVICE_BT_SCO;
}else if(speaker_on){
if(spk_dul_used){
output_device_id = OUT_DEVICE_SPEAKER;
}else{
output_device_id = OUT_DEVICE_SINGLE_SPEAKER;
}
}
ALOGV("****LINE:%d,FUNC:%s, output_device_id:%d",__LINE__,__FUNCTION__, output_device_id);
switch (adev->mode){
case AUDIO_MODE_NORMAL:
ALOGV("NORMAL mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
#if 0
if(sysopen_music())
output_device_id = OUT_DEVICE_HEADSET;
else
output_device_id = OUT_DEVICE_SPEAKER;
//output_device_id = OUT_DEVICE_HEADSET;
#endif
output_route = normal_route_configs[output_device_id];
break;
case AUDIO_MODE_RINGTONE:
ALOGV("RINGTONE mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
output_route = ringtone_route_configs[output_device_id];
break;
case AUDIO_MODE_FM:
break;
case AUDIO_MODE_MODE_FACTORY_TEST:
break;
case AUDIO_MODE_IN_CALL:
ALOGV("IN_CALL mode, ****LINE:%d,FUNC:%s, adev->out_device:%d",__LINE__,__FUNCTION__, adev->out_device);
output_route = phone_keytone_route_configs[CASE_NAME-1][output_device_id];
break;
case AUDIO_MODE_IN_COMMUNICATION:
output_route = normal_route_configs[output_device_id];
F_LOG;
if (output_device_id == OUT_DEVICE_BT_SCO && !last_communication_is_bt) {
F_LOG;
/* Open modem PCM channels */
if (adev->pcm_modem_dl == NULL) {
adev->pcm_modem_dl = pcm_open(0, 4, PCM_OUT, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_modem_dl)) {
ALOGE("cannot open PCM modem DL stream: %s", pcm_get_error(adev->pcm_modem_dl));
//goto err_open_dl;
}
}
if (adev->pcm_modem_ul == NULL) {
adev->pcm_modem_ul = pcm_open(0, 4, PCM_IN, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_modem_ul)) {
ALOGE("cannot open PCM modem UL stream: %s", pcm_get_error(adev->pcm_modem_ul));
//goto err_open_ul;
}
}
/* Open bt PCM channels */
if (adev->pcm_bt_dl == NULL) {
adev->pcm_bt_dl = pcm_open(0, PORT_bt, PCM_OUT, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_bt_dl)) {
ALOGE("cannot open PCM bt DL stream: %s", pcm_get_error(adev->pcm_bt_dl));
//goto err_open_bt_dl;
}
}
if (adev->pcm_bt_ul == NULL) {
adev->pcm_bt_ul = pcm_open(0, PORT_bt, PCM_IN, &pcm_config_vx);
if (!pcm_is_ready(adev->pcm_bt_ul)) {
ALOGE("cannot open PCM bt UL stream: %s", pcm_get_error(adev->pcm_bt_ul));
//goto err_open_bt_ul;
}
}
pcm_start(adev->pcm_modem_dl);
pcm_start(adev->pcm_modem_ul);
pcm_start(adev->pcm_bt_dl);
pcm_start(adev->pcm_bt_ul);
last_communication_is_bt = true;
}
break;
default:
break;
}
}
if (adev->active_input) {
if(adev->out_device & AUDIO_DEVICE_OUT_ALL_SCO){
adev->in_device = AUDIO_DEVICE_IN_BLUETOOTH_SCO_HEADSET;
}
int bt_on = adev->in_device & AUDIO_DEVICE_IN_ALL_SCO;
ALOGV("record,****LINE:%d,FUNC:%s, adev->in_device:%x,AUDIO_DEVICE_IN_ALL_SCO:%x",__LINE__,__FUNCTION__, adev->in_device,AUDIO_DEVICE_IN_ALL_SCO);
if (!bt_on) {
if ((adev->mode != AUDIO_MODE_IN_CALL) && (adev->active_input != 0)) {
/* sub mic is used for camcorder or VoIP on speaker phone */
sub_mic_on = (adev->active_input->source == AUDIO_SOURCE_CAMCORDER) ||
((adev->out_device & AUDIO_DEVICE_OUT_SPEAKER) &&
(adev->active_input->source == AUDIO_SOURCE_VOICE_COMMUNICATION));
}
if (!sub_mic_on) {
headset_on = adev->in_device & AUDIO_DEVICE_IN_WIRED_HEADSET;
main_mic_on = adev->in_device & AUDIO_DEVICE_IN_BUILTIN_MIC;
}
}
if (headset_on){
input_device_id = IN_SOURCE_HEADSETMIC;
} else if (main_mic_on) {
input_device_id = IN_SOURCE_MAINMIC;
}else if (bt_on && (adev->mode == AUDIO_MODE_IN_COMMUNICATION || adev->mode == AUDIO_MODE_IN_CALL)) {
input_device_id = IN_SOURCE_BTMIC;
}else{
input_device_id = IN_SOURCE_MAINMIC;
}
ALOGV("fm record,****LINE:%d,FUNC:%s,bt_on:%d,headset_on:%d,main_mic_on;%d,adev->in_device:%x,AUDIO_DEVICE_IN_ALL_SCO:%x",__LINE__,__FUNCTION__,bt_on,headset_on,main_mic_on,adev->in_device,AUDIO_DEVICE_IN_ALL_SCO);
if (adev->mode == AUDIO_MODE_IN_CALL) {
input_route = cap_phone_normal_route_configs[CASE_NAME-1][input_device_id];
ALOGV("phone record,****LINE:%d,FUNC:%s, adev->in_device:%x",__LINE__,__FUNCTION__, adev->in_device);
} else if (adev->mode == AUDIO_MODE_FM) {
//fm_record_enable(true);
//fm_record_route(adev->in_device);
ALOGV("fm record,****LINE:%d,FUNC:%s",__LINE__,__FUNCTION__);
} else if (adev->mode == AUDIO_MODE_NORMAL) {//1
if(dmic_used)
input_route = dmic_cap_normal_route_configs[input_device_id];
else
input_route = cap_normal_route_configs[input_device_id];
ALOGV("normal record,****LINE:%d,FUNC:%s,adev->in_device:%d",__LINE__,__FUNCTION__,adev->in_device);
} else if (adev->mode == AUDIO_MODE_IN_COMMUNICATION) {
if(dmic_used)
input_route = dmic_cap_normal_route_configs[input_device_id];
else
input_route = cap_normal_route_configs[input_device_id];
F_LOG;
}
}
if (phone_route)
audio_route_apply_path(adev->ar, phone_route);
if (output_route)
audio_route_apply_path(adev->ar, output_route);
if (input_route)
audio_route_apply_path(adev->ar, input_route);
audio_route_update_mixer(adev->ar);
if (adev->mode == AUDIO_MODE_IN_CALL ){
if(bton_temp && last_call_path_is_bt == 0){
bt_start_call(adev);
last_call_path_is_bt = 1;
}
}
}
So when we need to change the input device, this is the function where the relevant parameters and routes can be adjusted, and other platforms can implement the same scheme along these lines; the audio_route calls it relies on are sketched below.
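select_device() drives everything through the audio_route library (system/media/audio_route), via the same audio_route_apply_path()/audio_route_update_mixer() calls visible above, which translate named paths from the platform's mixer_paths.xml into ALSA mixer controls. Here is a minimal sketch of that flow; the card number and the path name "main-mic-capture" are hypothetical, real names come from the platform's XML (e.g. "media-speaker-off" above):
#include <audio_route/audio_route.h>
#include <stdio.h>
int main() {
    // A NULL xml_path falls back to the default mixer paths XML on the device.
    struct audio_route *ar = audio_route_init(0 /*card*/, NULL);
    if (ar == NULL) {
        fprintf(stderr, "audio_route_init failed\n");
        return 1;
    }
    audio_route_reset(ar);                           // return all controls to their reset state
    audio_route_apply_path(ar, "main-mic-capture");  // hypothetical input path name
    audio_route_update_mixer(ar);                    // push the pending changes to the kernel
    audio_route_free(ar);
    return 0;
}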
That wraps up how the PCM data is actually fetched. To sum up the role of the RecordThread: this is the thread that really pulls the PCM data, updates the data in the ring buffer, decides whether we are currently in an overrun state, and so on.
Summary:
startRecording() sets up the recording route, gets the recording thread going, and starts moving recording data from the driver into the AudioBuffer ring buffer. At this point the recording device node has been opened and data is being read from it.
The author's skills are limited, so if there are mistakes or omissions in this article, corrections from the experts are most welcome!
