Android M AudioPolicy Analysis


1. AudioPolicyService Basics

In the Android system, AudioPolicy is responsible for the audio "policy" side of things. Together with AudioFlinger it forms the two core services of the Android audio framework: one manages audio "routing" (AudioPolicyService) and the other manages the audio "devices" (AudioFlinger). In Android M, both services are loaded during system startup by the mediaserver process.

AudioPolicyService performs the following main tasks in the Android audio system (a usage sketch follows the list):

① Managing input and output devices, including their connection/disconnection state, and device selection and switching

② Managing the system's audio policies, for example how audio is handled when music plays during a call, or when a call comes in while music is playing

③ Managing system volume

④ Passing audio parameters from the upper layers down to the lower layers through AudioPolicyService
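As a quick illustration of ① and ④, the sketch below (not part of the startup flow analyzed in this article) shows how a native caller would reach AudioPolicyService through the AudioSystem facade; both calls travel over Binder to AudioPolicyService and are ultimately handled by AudioPolicyManager. Exact signatures can vary slightly between Android releases.

#include <media/AudioSystem.h>
#include <system/audio.h>

void audioPolicyUsageSketch() {
    // tell the policy that a call has started, so outputs get re-routed
    android::AudioSystem::setPhoneState(AUDIO_MODE_IN_CALL);
    // set the music stream volume index for the speaker device
    android::AudioSystem::setStreamVolumeIndex(AUDIO_STREAM_MUSIC,
                                               5 /* index */,
                                               AUDIO_DEVICE_OUT_SPEAKER);
}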

2. AudioPolicyService Startup Flow

 

AudioPolicyService runs inside the mediaserver process and is started when the mediaserver process starts.

// frameworks/av/media/mediaserver/main_mediaserver.cpp
int main(int argc __unused, char** argv)
{
    ......
    ......
    AudioFlinger::instantiate();
    MediaPlayerService::instantiate();
    AudioPolicyService::instantiate();
    ......
}

AudioFlinger::instantiate() is not implemented by AudioFlinger itself; it comes from the BinderService template class. Several services, including AudioFlinger and AudioPolicyService, inherit from this common Binder service base class, whose implementation lives in BinderService.h.

// frameworks/native/include/binder/BinderService.h
static void instantiate() { publish(); }

The body is only one line: it calls its own publish() function, so let's look at publish().

static status_t publish(bool allowIsolated = false) 
{
    sp<IServiceManager> sm(defaultServiceManager());
    return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated);
}

SERVICE is a template parameter defined in this file. Since it is AudioPolicyService that called instantiate(), SERVICE here is AudioPolicyService:

//  frameworks/native/include/binder/BinderService.h
template<typename SERVICE>
class BinderService

So publish() obtains the ServiceManager proxy, news an object of the service that called instantiate(), and adds it to ServiceManager under its service name.
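As a side note, here is a minimal sketch of how any native client would later retrieve the Binder proxy that publish() registered, assuming the service name "media.audio_policy" returned by AudioPolicyService::getServiceName():

#include <binder/IServiceManager.h>
#include <media/IAudioPolicyService.h>
#include <utils/String16.h>

// minimal sketch: look up the service registered by publish() and cast the
// raw binder to the IAudioPolicyService interface
android::sp<android::IAudioPolicyService> getAudioPolicyService() {
    using namespace android;
    sp<IServiceManager> sm = defaultServiceManager();
    sp<IBinder> binder = sm->getService(String16("media.audio_policy"));
    return interface_cast<IAudioPolicyService>(binder);
}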

The next step is therefore to look at the AudioPolicyService constructor.

// frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp
AudioPolicyService::AudioPolicyService() : BnAudioPolicyService(), 
                                    mpAudioPolicy(NULL),
                                    mAudioPolicyManager(NULL),   
                                    mAudioPolicyClient(NULL), 
                                    mPhoneState(AUDIO_MODE_INVALID)
{
}

The constructor does nothing beyond initializing a few member variables, so where does AudioPolicyService actually perform its initialization? Looking at the class again: AudioPolicyService inherits from BnAudioPolicyService, and walking up the inheritance chain we eventually reach RefBase. Because of the strong-pointer (sp<>) mechanism, onFirstRef() is called on an object the first time a strong reference to it is taken.
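The following standalone sketch (not AOSP code) demonstrates that RefBase behaviour: onFirstRef() fires the first time the object is wrapped in a strong pointer, which is exactly what happens when publish() creates the sp<AudioPolicyService> passed to addService().

#define LOG_TAG "RefBaseSketch"
#include <utils/RefBase.h>
#include <utils/Log.h>

class Demo : public android::RefBase {
protected:
    // called automatically the first time a strong reference is taken
    virtual void onFirstRef() {
        ALOGI("Demo::onFirstRef - real initialization can go here");
    }
};

void refBaseSketch() {
    Demo* raw = new Demo();         // constructor runs, onFirstRef() not yet called
    android::sp<Demo> strong(raw);  // first sp<> -> onFirstRef() fires here
}

With that in mind, let's see whether AudioPolicyService::onFirstRef() contains the initialization we are looking for.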

// frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp
void AudioPolicyService::onFirstRef()
{
    ......
    // thread used to play tones
    mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
    // thread used to execute audio commands
    mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
    // thread used to execute output commands
    mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
    #ifdef USE_LEGACY_AUDIO_POLICY
    // USE_LEGACY_AUDIO_POLICY is not defined in this source tree,
    // so this branch is skipped and the #else branch below is taken
    ......
    #else
    ALOGI("AudioPolicyService CSTOR in new mode");
    mAudioPolicyClient = new AudioPolicyClient(this);                                    
    mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    #endif
}

So on its first strong reference, AudioPolicyService creates three AudioCommandThread instances and an AudioPolicyManager.

Let's look at what happens after the three AudioCommandThread objects are created. First it directly news an AudioPolicyClient; the AudioPolicyClient class is defined in AudioPolicyService.h:

//frameworks/av/services/audiopolicy/service/AudioPolicyService.h
class AudioPolicyClient : public AudioPolicyClientInterface
{
    ......
}

Its implementation is in frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp. After creating the AudioPolicyClient, an AudioPolicyManager object is created by calling createAudioPolicyManager(). Let's see how that method creates the AudioPolicyManager.

//frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp

extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    return new AudioPolicyManager(clientInterface);
}

As we can see, it simply news an AudioPolicyManager and passes in the AudioPolicyClient that was just created.

//  frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    ......
    mEngine->setObserver(this);
    ......
}

mEngine->setObserver(this) registers this AudioPolicyManager object as the engine's mApmObserver. So in frameworks/av/services/audiopolicy/enginedefault/Engine.cpp, calls through mApmObserver->xxx() end up in member functions of AudioPolicyManager. In other words, when the Engine class needs something from AudioPolicyManager it goes through mApmObserver->xxx(), and when AudioPolicyManager needs something from the Engine it goes through mEngine->xxx().
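To make the two-way wiring clearer, here is a simplified, self-contained sketch. These are not the real AOSP classes and the callback name is hypothetical; it only mirrors the observer relationship set up by mEngine->setObserver(this).

// Engine side: holds a back-pointer to its observer (the manager)
class ApmObserverSketch {
public:
    virtual ~ApmObserverSketch() {}
    virtual void onPolicyDecision() = 0;   // hypothetical callback
};

class EngineSketch {
public:
    void setObserver(ApmObserverSketch* observer) { mApmObserver = observer; }
    void decideSomething() {
        // Engine code reaches back into the manager via mApmObserver->xxx()
        if (mApmObserver != nullptr) mApmObserver->onPolicyDecision();
    }
private:
    ApmObserverSketch* mApmObserver = nullptr;
};

// Manager side: owns the engine and registers itself as the observer
class ManagerSketch : public ApmObserverSketch {
public:
    ManagerSketch() { mEngine.setObserver(this); }   // mirrors mEngine->setObserver(this)
    virtual void onPolicyDecision() { /* react to the engine's decision */ }
    void useEngine() { mEngine.decideSomething(); }  // manager calls the engine via mEngine.xxx()
private:
    EngineSketch mEngine;
};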

3. AudioPolicyManager Analysis

Most of the functionality of AudioPolicyService, such as audio policy management, input/output device management, device switching and volume adjustment, is actually implemented by AudioPolicyManager. AudioPolicyManager is created by AudioPolicyService via

mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

Let's look at how createAudioPolicyManager() creates the AudioPolicyManager.

// frameworks/av/services/audiopolicy/manager/AudioPolicyFactory.cpp
extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    return new AudioPolicyManager(clientInterface);
}

It creates the object by directly newing an AudioPolicyManager, so we can go straight to the AudioPolicyManager constructor.

//frameworks/av/services/audiopolicy/managerdefault/AudioPolicyManager.cpp
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
{
    // 1. load the audio_policy.conf configuration file (/vendor/etc/audio_policy.conf)
    ConfigParsingUtils::loadAudioPolicyConfig(....);
    // 2. initialize the volume curves for each stream type
    mEngine->initializeVolumeCurves(mSpeakerDrcEnabled);
    // 3. load the audio policy hardware abstraction libraries
    mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);
    // 4. open the output devices
    mpClientInterface->openOutput(....);
    // 5. save the output device descriptor objects
    addOutput(output, outputDesc);
    // 6. set the output device
    setOutputDevice(....);
    // 7. update devices and outputs
    updateDevicesAndOutputs();
}

The overall flow is roughly as follows.

1. Load the audio_policy.conf configuration file: /vendor/etc/audio_policy.conf, falling back to /system/etc/audio_policy.conf (the emulator uses defaultAudioPolicyConfig() instead).

During the creation of AudioPolicyManager, audio devices are loaded according to the audio_policy.conf configuration file. Android defines a hardware abstraction layer implementation for each audio interface; a reference HAL implementation can be found at:

hardware/libhardware/modules/audio

Each audio interface is compiled into its own shared library, for example:

external/bluetooth/bluedroid/audio_a2dp_hw/   ->  audio.a2dp.default.so
hardware/libhardware/modules/audio/           ->  audio.primary.default.so
hardware/libhardware/modules/usbaudio/        ->  audio.usb.default.so

Each audio interface defines different inputs and outputs; an interface can have multiple inputs or outputs, and each input or output can support several devices. By parsing audio_policy.conf the system learns the parameters of every audio interface it supports. AudioPolicyManager first tries to load /vendor/etc/audio_policy.conf; if that file does not exist, it falls back to /system/etc/audio_policy.conf. Once all audio interfaces have been loaded, AudioPolicyManager knows the parameters of every supported interface and can make decisions for audio output.
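A minimal sketch of that fallback (the helper name is made up; the real logic is inlined in the AudioPolicyManager constructor):

#include <sys/stat.h>
#include <string>

// hypothetical helper: prefer the vendor copy, fall back to the system copy
static std::string pickAudioPolicyConf() {
    struct stat st;
    if (stat("/vendor/etc/audio_policy.conf", &st) == 0) {
        return "/vendor/etc/audio_policy.conf";
    }
    return "/system/etc/audio_policy.conf";
}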

audio_policy.conf can define several audio interfaces at once. Each interface contains a number of outputs and inputs, each output and input supports multiple I/O configurations, and each configuration in turn supports several devices.

ConfigParsingUtils::loadAudioPolicyConfig(....);

The parsing is split into two parts. The first part parses the global tags; the second parses the audio_hw_modules tag, whose children each describe a hardware module (for example primary and r_submix), all of which are parsed into mHwModules. A hardware module in turn has outputs and inputs children: each child of outputs is parsed into that module's mOutputProfiles, and each child of inputs into its mInputProfiles.
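For reference, a trimmed-down audio_policy.conf might look like the following (a simplified, hypothetical example, not copied from any particular device):

global_configuration {
  attached_output_devices AUDIO_DEVICE_OUT_SPEAKER
  default_output_device AUDIO_DEVICE_OUT_SPEAKER
  attached_input_devices AUDIO_DEVICE_IN_BUILTIN_MIC
}

audio_hw_modules {
  primary {
    outputs {
      primary {
        sampling_rates 44100
        channel_masks AUDIO_CHANNEL_OUT_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_OUT_SPEAKER|AUDIO_DEVICE_OUT_WIRED_HEADPHONE
        flags AUDIO_OUTPUT_FLAG_PRIMARY
      }
    }
    inputs {
      primary {
        sampling_rates 8000|16000|44100
        channel_masks AUDIO_CHANNEL_IN_MONO|AUDIO_CHANNEL_IN_STEREO
        formats AUDIO_FORMAT_PCM_16_BIT
        devices AUDIO_DEVICE_IN_BUILTIN_MIC
      }
    }
  }
}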

2. Initialize the volume curves for each stream type

mEngine->initializeVolumeCurves(mSpeakerDrcEnabled);

3. Load the audio policy hardware abstraction libraries

mHwModules[i]->mHandle = mpClientInterface->loadHwModule(mHwModules[i]->mName);

//frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp
audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return 0;
    }
    return af->loadHwModule(name);
}

This simply calls into AudioFlinger::loadHwModule(). Let's go into AudioFlinger.cpp and see what it does.

// When AudioPolicyManager is constructed, it reads the vendor's audio description file
// audio_policy.conf and opens the audio interfaces accordingly; this eventually calls
// into AudioFlinger::loadHwModule()
audio_module_handle_t AudioFlinger::loadHwModule(const char *name)
{
    if (name == NULL) {
        return 0;
    }
    if (!settingsAllowed()) {
        return 0;
    }
    Mutex::Autolock _l(mLock);
    return loadHwModule_l(name);
}

This ends up in loadHwModule_l(); let's continue into that function.

// possible values of name:
static const char * const audio_interfaces[] = {
    AUDIO_HARDWARE_MODULE_ID_PRIMARY, // primary audio device, must exist
    AUDIO_HARDWARE_MODULE_ID_A2DP,    // Bluetooth A2DP audio
    AUDIO_HARDWARE_MODULE_ID_USB,     // USB audio
};

audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    // 1. check whether this interface has already been loaded
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }

    // 2. load the audio interface
    audio_hw_device_t *dev;
    int rc = load_audio_interface(name, &dev);
    ......
    // 3. initialize it
    rc = dev->init_check(dev);
    // 4. add it to the global map of loaded devices
    audio_module_handle_t handle = nextUniqueId();
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));
    return handle;
}

 

Step 2 loads the specified audio interface (such as primary, a2dp or usb) via the function load_audio_interface().

It loads the shared library required by the device, then opens the device and creates an audio_hw_device_t instance. The library name for an audio interface follows a fixed pattern, e.g. for a2dp it may be audio.a2dp.so or audio.a2dp.default.so, and the search paths are mainly /system/lib/hw and /vendor/lib/hw.
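Before the real code, here is a rough sketch of the name/path probing this implies. The helper below is hypothetical and only mirrors, in spirit, what hw_module_exists() in hardware.c does:

#include <stdio.h>
#include <unistd.h>

// build "audio.<interface>.<variant>.so" and probe /vendor/lib/hw first,
// then /system/lib/hw; returns 0 and fills 'path' on success
static int find_audio_hal(char *path, size_t len,
                          const char *if_name, const char *variant)
{
    snprintf(path, len, "/vendor/lib/hw/audio.%s.%s.so", if_name, variant);
    if (access(path, R_OK) == 0) return 0;
    snprintf(path, len, "/system/lib/hw/audio.%s.%s.so", if_name, variant);
    if (access(path, R_OK) == 0) return 0;
    return -1;
}

Now let's look at how load_audio_interface() is actually implemented.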

//frameworks/av/services/audioflinger/AudioFlinger.cpp
static int load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    ......
    // 1. get the audio module (AUDIO_HARDWARE_MODULE_ID is defined as "audio")
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    // 2. open the audio device
    rc = audio_hw_device_open(mod, dev);
    ......
    return rc;
}
int hw_get_module_by_class(const char *class_id, const char *inst,
                           const struct hw_module_t **module)
{
    int i = 0;
    char prop[PATH_MAX] = {0};
    char path[PATH_MAX] = {0};
    char name[PATH_MAX] = {0};
    char prop_name[PATH_MAX] = {0};

    // build the module name string ("audio.<inst>")
    if (inst)
        snprintf(name, PATH_MAX, "%s.%s", class_id, inst);
    else
        strlcpy(name, class_id, PATH_MAX);

    /*
     * Here we rely on the fact that calling dlopen multiple times on
     * the same .so will simply increment a refcount (and not load
     * a new copy of the library).
     * We also assume that dlopen() is thread-safe.
     */
    /* First try a property specific to the class and possibly instance */
    snprintf(prop_name, sizeof(prop_name), "ro.hardware.%s", name);
    if (property_get(prop_name, prop, NULL) > 0) {
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Loop through the configuration variants looking for a module */
    for (i=0 ; i<HAL_VARIANT_KEYS_COUNT; i++) {
        if (property_get(variant_keys[i], prop, NULL) == 0) {
            continue;
        }
        if (hw_module_exists(path, sizeof(path), name, prop) == 0) {
            goto found;
        }
    }

    /* Nothing found, try the default */
    if (hw_module_exists(path, sizeof(path), name, "default") == 0) {
        goto found;
    }

    return -ENOENT;

found:
    /* load the module, if this fails, we're doomed, and we should not try
     * to load a different variant. */
    // load it
    return load(class_id, path, module);
}

The string path ends up as /system/lib/hw/audio.xxx.so (or the /vendor/lib/hw equivalent). The load() function then opens it via handle = dlopen(path, RTLD_NOW); and returns a handle.

//hardware/libhardware/hardware.c
static int load(const char *id,
        const char *path,
        const struct hw_module_t **pHmi)
{
    ......
    // open the shared library with dlopen()
    handle = dlopen(path, RTLD_NOW);
    ......
    // look up the hw_module_t symbol (HMI) in the library with dlsym()
    const char *sym = HAL_MODULE_INFO_SYM_AS_STR;
    hmi = (struct hw_module_t *)dlsym(handle, sym);
    ......
}
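For completeness, step 2 of load_audio_interface(), audio_hw_device_open(), is just a thin inline wrapper declared in hardware/libhardware/include/hardware/audio.h (shown here slightly abridged); it asks the module it was given to open its audio device interface:

static inline int audio_hw_device_open(const struct hw_module_t* module,
                                       struct audio_hw_device** device)
{
    return module->methods->open(module, AUDIO_HARDWARE_INTERFACE,
                                 (struct hw_module_t**)device);
}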

4. Open the output devices

//frameworks/av/services/audiopolicy/service/AudioPolicyClientImpl.cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}

 

This calls into AudioFlinger::openOutput() in frameworks/av/services/audioflinger/AudioFlinger.cpp.

status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                  audio_io_handle_t *output,
                                  audio_config_t *config,
                                  audio_devices_t *devices,
                                  const String8& address,
                                  uint32_t *latencyMs,
                                  audio_output_flags_t flags)
{
    ALOGI("openOutput(), module %d Device %x, SamplingRate %d, Format %#08x, Channels %x, flags %x",
              module,
              (devices != NULL) ? *devices : 0,
              config->sample_rate,
              config->format,
              config->channel_mask,
              flags);

    if (*devices == AUDIO_DEVICE_NONE) {
        return BAD_VALUE;
    }

    Mutex::Autolock _l(mLock);
    sp<PlaybackThread> thread = openOutput_l(module, output, config, *devices, address, flags);
    if (thread != 0) {
        *latencyMs = thread->latency();

        // notify client processes of the new output creation
        thread->ioConfigChanged(AUDIO_OUTPUT_OPENED);

        // the first primary output opened designates the primary hw device
        if ((mPrimaryHardwareDev == NULL) && (flags & AUDIO_OUTPUT_FLAG_PRIMARY)) {
            ALOGI("Using module %d has the primary audio interface", module);
            mPrimaryHardwareDev = thread->getOutput()->audioHwDev;

            AutoMutex lock(mHardwareLock);
            mHardwareStatus = AUDIO_HW_SET_MODE;
            mPrimaryHardwareDev->hwDevice()->set_mode(mPrimaryHardwareDev->hwDevice(), mMode);
            mHardwareStatus = AUDIO_HW_IDLE;
        }
        return NO_ERROR;
    }

    return NO_INIT;
}

Now let's look at the openOutput_l() function.

// The module argument is the audio interface id returned earlier by loadHwModule();
// it is used to look up the corresponding AudioHwDevice in mAudioHwDevs.
// This method also adds the newly opened output to mPlaybackThreads.
sp<AudioFlinger::PlaybackThread> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // 1. find the matching audio interface
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    if (outHwDev == NULL) {
        return 0;
    }

    audio_hw_device_t *hwDevHal = outHwDev->hwDevice();
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId();
    }
    // 2. open an output stream on the device, creating the Audio HAL output object
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());
    // 3. create the playback thread
    if (status == NO_ERROR) {
        PlaybackThread *thread;
        if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
            thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created offload output: ID %d thread %p", *output, thread);
        } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                || !isValidPcmSinkFormat(config->format)
                || !isValidPcmSinkChannelMask(config->channel_mask)) {
            thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created direct output: ID %d thread %p", *output, thread);
        } else {
            // normally a mixer thread is created; outputStream (the AudioStreamOut) is passed in
            thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
            ALOGV("openOutput_l() created mixer output: ID %d thread %p", *output, thread);
        }
        mPlaybackThreads.add(*output, thread);  // register the playback thread
        return thread;
    }

    return 0;
}

 

