In the article 《基於Allwinner的Audio子系統分析(Android-5.1)》 we covered the overall audio system architecture and the application-level call flow. This article continues by analyzing the implementation of AudioRecord's getMinBufferSize method.
Prototype:
public static int getMinBufferSize (int sampleRateInHz, int channelConfig, int audioFormat)
Purpose:
Returns the minimum buffer size required to successfully create an AudioRecord object.
Parameters:
sampleRateInHz: the sample rate in Hz. We use 44100 here; 44100 Hz is currently the only rate guaranteed to work on all devices.
channelConfig: the channel configuration. We use AudioFormat.CHANNEL_CONFIGURATION_MONO, which is guaranteed to work on all devices.
audioFormat: the audio sample format. We use AudioFormat.ENCODING_PCM_16BIT.
Return value:
The minimum buffer size, in bytes, needed to create an AudioRecord object. Note that this size does not guarantee smooth recording under load; a larger value should be chosen depending on how frequently the AudioRecord instance is polled for new data.
Returns ERROR_BAD_VALUE (-2) if the recording parameters are not supported by the hardware or an invalid parameter was passed, and ERROR (-1) if the implementation was unable to query the hardware for its input properties.
Now let's walk through the concrete implementation in the source.
frameworks/base/media/java/android/media/AudioRecord.java
static public int getMinBufferSize(int sampleRateInHz, int channelConfig, int audioFormat) {
    int channelCount = 0;
    switch (channelConfig) {
    case AudioFormat.CHANNEL_IN_DEFAULT: // AudioFormat.CHANNEL_CONFIGURATION_DEFAULT, 1
    case AudioFormat.CHANNEL_IN_MONO: // 0x10 = 16
    case AudioFormat.CHANNEL_CONFIGURATION_MONO: // 2
        channelCount = 1;
        break;
    case AudioFormat.CHANNEL_IN_STEREO: // 0xC = 12
    case AudioFormat.CHANNEL_CONFIGURATION_STEREO: // 3
    case (AudioFormat.CHANNEL_IN_FRONT | AudioFormat.CHANNEL_IN_BACK): // 0x10 | 0x20 = 48
        channelCount = 2;
        break;
    case AudioFormat.CHANNEL_INVALID: // 0
    default:
        loge("getMinBufferSize(): Invalid channel configuration.");
        return ERROR_BAD_VALUE;
    }

    // PCM_8BIT is not supported at the moment
    if (audioFormat != AudioFormat.ENCODING_PCM_16BIT) {
        loge("getMinBufferSize(): Invalid audio format.");
        return ERROR_BAD_VALUE;
    }

    int size = native_get_min_buff_size(sampleRateInHz, channelCount, audioFormat);
    if (size == 0) {
        return ERROR_BAD_VALUE;
    } else if (size == -1) {
        return ERROR;
    } else {
        return size;
    }
}
This validates the channel configuration and the sample format: channelCount is 1 for mono (MONO) and 2 for stereo (STEREO), and on the A64 only PCM_16BIT sampling (2 bytes per sample) is supported. The method then calls into the native layer.
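The channel-config mapping above can be reproduced as a small standalone helper. The constant values are copied from android.media.AudioFormat (they match the comments in the code), but the class and method names here are ours, purely for illustration:

```java
// Standalone sketch of the channel-config -> channel-count mapping in
// getMinBufferSize(). Constants mirror android.media.AudioFormat values;
// the class/method names are illustrative, not part of the framework.
public class ChannelMapping {
    static final int CHANNEL_INVALID = 0;
    static final int CHANNEL_IN_DEFAULT = 1;              // == CHANNEL_CONFIGURATION_DEFAULT
    static final int CHANNEL_CONFIGURATION_MONO = 2;
    static final int CHANNEL_CONFIGURATION_STEREO = 3;
    static final int CHANNEL_IN_STEREO = 0xC;             // IN_LEFT | IN_RIGHT
    static final int CHANNEL_IN_MONO = 0x10;              // == CHANNEL_IN_FRONT
    static final int CHANNEL_IN_FRONT_BACK = 0x10 | 0x20; // IN_FRONT | IN_BACK = 48

    /** Returns the channel count, or -2 (ERROR_BAD_VALUE) for invalid configs. */
    public static int channelCount(int channelConfig) {
        switch (channelConfig) {
        case CHANNEL_IN_DEFAULT:
        case CHANNEL_IN_MONO:
        case CHANNEL_CONFIGURATION_MONO:
            return 1;
        case CHANNEL_IN_STEREO:
        case CHANNEL_CONFIGURATION_STEREO:
        case CHANNEL_IN_FRONT_BACK:
            return 2;
        default: // includes CHANNEL_INVALID
            return -2; // ERROR_BAD_VALUE
        }
    }

    public static void main(String[] args) {
        System.out.println(channelCount(0x10)); // CHANNEL_IN_MONO -> 1
        System.out.println(channelCount(0xC));  // CHANNEL_IN_STEREO -> 2
        System.out.println(channelCount(0));    // CHANNEL_INVALID -> -2
    }
}
```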
frameworks/base/core/jni/android_media_AudioRecord.cpp
static jint android_media_AudioRecord_get_min_buff_size(JNIEnv *env, jobject thiz,
        jint sampleRateInHertz, jint channelCount, jint audioFormat) {
    ALOGV(">> android_media_AudioRecord_get_min_buff_size(%d, %d, %d)",
            sampleRateInHertz, channelCount, audioFormat);

    size_t frameCount = 0;
    // convert the Java-level format constant into the native audio_format_t
    audio_format_t format = audioFormatToNative(audioFormat); // AUDIO_FORMAT_PCM_16_BIT = 0x1
    // query the minimum frame count and check whether the hardware supports it
    status_t result = AudioRecord::getMinFrameCount(&frameCount,
            sampleRateInHertz,
            format,
            audio_channel_in_mask_from_count(channelCount));

    if (result == BAD_VALUE) {
        return 0;
    }
    if (result != NO_ERROR) {
        return -1;
    }
    return frameCount * channelCount * audio_bytes_per_sample(format);
}
This calls down toward the service side to obtain frameCount, the minimum number of sample frames, and finally returns frameCount × channelCount × bytesPerSample. Next, let's look at how frameCount is computed.
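The JNI layer's return expression converts frames back into bytes. A minimal sketch of just that conversion, under the assumption that audio_bytes_per_sample() returns 2 for AUDIO_FORMAT_PCM_16_BIT (the method name below is ours):

```java
// Sketch of the JNI return expression: frameCount * channelCount * bytesPerSample.
// For 16-bit PCM, audio_bytes_per_sample() yields 2 bytes per sample.
public class MinBuffBytes {
    public static int minBuffBytes(int frameCount, int channelCount, int bytesPerSample) {
        return frameCount * channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        // e.g. 2048 frames, mono, 16-bit -> 4096 bytes
        System.out.println(minBuffBytes(2048, 1, 2));
    }
}
```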
frameworks/av/media/libmedia/AudioRecord.cpp
status_t AudioRecord::getMinFrameCount(
        size_t* frameCount,
        uint32_t sampleRate,
        audio_format_t format,
        audio_channel_mask_t channelMask)
{
    if (frameCount == NULL) {
        return BAD_VALUE;
    }

    size_t size;
    status_t status = AudioSystem::getInputBufferSize(sampleRate, format, channelMask, &size);
    if (status != NO_ERROR) {
        ALOGE("AudioSystem could not query the input buffer size for sampleRate %u, format %#x, "
                "channelMask %#x; status %d", sampleRate, format, channelMask, status);
        return status;
    }

    // compute the minimum frame count
    // We double the size of input buffer for ping pong use of record buffer.
    // Assumes audio_is_linear_pcm(format)
    if ((*frameCount = (size * 2) / (audio_channel_count_from_in_mask(channelMask) *
            audio_bytes_per_sample(format))) == 0) {
        ALOGE("Unsupported configuration: sampleRate %u, format %#x, channelMask %#x",
                sampleRate, format, channelMask);
        return BAD_VALUE;
    }

    return NO_ERROR;
}
So frameCount = size × 2 / (channelCount × bytesPerSample). Note the doubling: the input buffer is sized for ping-pong use of the record buffer. size itself comes from the HAL; AudioSystem::getInputBufferSize() eventually reaches down into the HAL layer.
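The frame-count arithmetic above can be checked in isolation (the method name is ours; halBufferBytes stands for the size reported by AudioSystem::getInputBufferSize):

```java
// Sketch of AudioRecord::getMinFrameCount(): the HAL buffer size in bytes is
// doubled (ping-pong record buffer) and then converted from bytes to frames.
public class MinFrameCount {
    public static long minFrameCount(long halBufferBytes, int channelCount, int bytesPerSample) {
        return (halBufferBytes * 2) / (channelCount * bytesPerSample);
    }

    public static void main(String[] args) {
        // HAL reports 2048 bytes for 44.1 kHz mono 16-bit -> 2048 frames
        System.out.println(minFrameCount(2048, 1, 2));
    }
}
```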
frameworks/av/media/libmedia/AudioSystem.cpp
status_t AudioSystem::getInputBufferSize(uint32_t sampleRate, audio_format_t format,
        audio_channel_mask_t channelMask, size_t* buffSize)
{
    const sp<IAudioFlinger>& af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        return PERMISSION_DENIED;
    }
    Mutex::Autolock _l(gLockCache);
    // Do we have a stale gInBufferSize or are we requesting the input buffer size for new values
    size_t inBuffSize = gInBuffSize;
    if ((inBuffSize == 0) || (sampleRate != gPrevInSamplingRate) || (format != gPrevInFormat)
        || (channelMask != gPrevInChannelMask)) {
        gLockCache.unlock();
        inBuffSize = af->getInputBufferSize(sampleRate, format, channelMask);
        gLockCache.lock();
        if (inBuffSize == 0) {
            ALOGE("AudioSystem::getInputBufferSize failed sampleRate %d format %#x channelMask %x",
                    sampleRate, format, channelMask);
            return BAD_VALUE;
        }
        // A benign race is possible here: we could overwrite a fresher cache entry
        // save the request params
        gPrevInSamplingRate = sampleRate;
        gPrevInFormat = format;
        gPrevInChannelMask = channelMask;

        gInBuffSize = inBuffSize;
    }
    *buffSize = inBuffSize;

    return NO_ERROR;
}
Here get_audio_flinger() obtains a proxy to the AudioFlinger service:
const sp<IAudioFlinger> AudioSystem::get_audio_flinger()
{
    sp<IAudioFlinger> af;
    sp<AudioFlingerClient> afc;
    {
        Mutex::Autolock _l(gLock);
        if (gAudioFlinger == 0) {
            sp<IServiceManager> sm = defaultServiceManager();
            sp<IBinder> binder;
            do {
                binder = sm->getService(String16("media.audio_flinger"));
                if (binder != 0)
                    break;
                ALOGW("AudioFlinger not published, waiting...");
                usleep(500000); // 0.5 s
            } while (true);
            if (gAudioFlingerClient == NULL) {
                gAudioFlingerClient = new AudioFlingerClient();
            } else {
                if (gAudioErrorCallback) {
                    gAudioErrorCallback(NO_ERROR);
                }
            }
            binder->linkToDeath(gAudioFlingerClient);
            gAudioFlinger = interface_cast<IAudioFlinger>(binder);
            LOG_ALWAYS_FATAL_IF(gAudioFlinger == 0);
            afc = gAudioFlingerClient;
        }
        af = gAudioFlinger;
    }
    if (afc != 0) {
        af->registerClient(afc);
    }
    return af;
}
getInputBufferSize then checks whether the requested parameters match the previously cached ones; this avoids tying up hardware resources when getMinBufferSize is called repeatedly with the same parameters. On the first call, or whenever the parameters change, it calls AudioFlinger's getInputBufferSize method to fetch the buffer size. Since af is a smart pointer of type IAudioFlinger, the call actually travels over Binder into AudioFlinger.
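The caching pattern can be sketched as a tiny standalone class. This is a simplification of the real code (names are ours, and the real implementation drops gLockCache around the Binder call, which is what allows the benign race mentioned in the comment; here we just hold one lock throughout):

```java
import java.util.function.IntSupplier;

// Sketch of the AudioSystem-side cache: the expensive cross-process query runs
// only when the requested parameters differ from the last cached request.
public class InputBufferCache {
    private int prevRate, prevFormat, prevMask, cachedSize;

    /** Returns the cached size, re-querying via 'query' on a parameter change. */
    public synchronized int get(int rate, int format, int mask, IntSupplier query) {
        if (cachedSize == 0 || rate != prevRate || format != prevFormat || mask != prevMask) {
            cachedSize = query.getAsInt(); // stands in for the Binder call to AudioFlinger
            prevRate = rate;
            prevFormat = format;
            prevMask = mask;
        }
        return cachedSize;
    }
}
```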
frameworks/av/services/audioflinger/AudioFlinger.cpp
size_t AudioFlinger::getInputBufferSize(uint32_t sampleRate, audio_format_t format,
        audio_channel_mask_t channelMask) const
{
    status_t ret = initCheck();
    if (ret != NO_ERROR) {
        return 0;
    }

    AutoMutex lock(mHardwareLock);
    mHardwareStatus = AUDIO_HW_GET_INPUT_BUFFER_SIZE;
    audio_config_t config;
    memset(&config, 0, sizeof(config));
    config.sample_rate = sampleRate;
    config.channel_mask = channelMask;
    config.format = format;

    audio_hw_device_t *dev = mPrimaryHardwareDev->hwDevice();
    size_t size = dev->get_input_buffer_size(dev, &config);
    mHardwareStatus = AUDIO_HW_IDLE;
    return size;
}
The parameters are forwarded to the HAL to obtain the buffer size.
hardware/aw/audio/tulip/audio_hw.c
static size_t adev_get_input_buffer_size(const struct audio_hw_device *dev,
        const struct audio_config *config)
{
    size_t size;
    int channel_count = popcount(config->channel_mask);
    if (check_input_parameters(config->sample_rate, config->format, channel_count) != 0)
        return 0;

    return get_input_buffer_size(config->sample_rate, config->format, channel_count);
}
The parameters are validated once more here. Why do so many of these functions repeat the check? Because a function like get_input_buffer_size may also be reached from other call paths, each entry point validates its inputs to be safe.
static size_t get_input_buffer_size(uint32_t sample_rate, int format, int channel_count)
{
    size_t size;
    size_t device_rate;

    if (check_input_parameters(sample_rate, format, channel_count) != 0)
        return 0;

    /* take resampling into account and return the closest majoring
    multiple of 16 frames, as audioflinger expects audio buffers to
    be a multiple of 16 frames */
    size = (pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate;
    size = ((size + 15) / 16) * 16;

    return size * channel_count * sizeof(short);
}
This relies on a struct pcm_config, which defines how many sample frames one period contains; its rate field is the reference for the resampling calculation. Here rate is MM_SAMPLING_RATE, i.e. 44100 Hz, and one period holds 1024 sample frames, from which the resampled size is computed.
Because AudioFlinger expects audio buffers to be a multiple of 16 frames, the result is rounded up to the nearest multiple of 16, and the function finally returns size × channel count × bytes per sample.
struct pcm_config pcm_config_mm_in = {
    .channels = 2,
    .rate = MM_SAMPLING_RATE,
    .period_size = 1024,
    .period_count = CAPTURE_PERIOD_COUNT,
    .format = PCM_FORMAT_S16_LE,
};
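The HAL arithmetic can be replayed with the pcm_config_mm_in values plugged in (period_size = 1024, rate = 44100, 2 bytes per sample for PCM_FORMAT_S16_LE; the class and method names below are ours):

```java
// Sketch of the HAL's get_input_buffer_size() arithmetic, using the values
// from pcm_config_mm_in. Names are illustrative, not from the HAL source.
public class HalInputBufferSize {
    static final int PERIOD_SIZE = 1024;   // pcm_config_mm_in.period_size
    static final int HW_RATE = 44100;      // pcm_config_mm_in.rate (MM_SAMPLING_RATE)
    static final int BYTES_PER_SAMPLE = 2; // sizeof(short), PCM_FORMAT_S16_LE

    public static int inputBufferBytes(int sampleRate, int channelCount) {
        // scale the period size by the requested rate (resampling)
        int frames = (PERIOD_SIZE * sampleRate) / HW_RATE;
        // round up to the next multiple of 16 frames, as AudioFlinger expects
        frames = ((frames + 15) / 16) * 16;
        return frames * channelCount * BYTES_PER_SAMPLE;
    }

    public static void main(String[] args) {
        System.out.println(inputBufferBytes(44100, 1)); // 1024 frames -> 2048 bytes
        System.out.println(inputBufferBytes(22050, 1)); // 512 frames -> 1024 bytes
    }
}
```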
Summary:
minBuffSize = ((((((((pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate) + 15) / 16) * 16) * channel_count * sizeof(short)) * 2) / (audio_channel_count_from_in_mask(channelMask) * audio_bytes_per_sample(format))) * channelCount * audio_bytes_per_sample(format)
           = (((((((pcm_config_mm_in.period_size * sample_rate) / pcm_config_mm_in.rate) + 15) / 16) * 16) * channel_count * sizeof(short)) * 2)
where pcm_config_mm_in.period_size = 1024 and pcm_config_mm_in.rate = 44100. Notice that the expression divides by (channelCount × bytesPerSample) and then multiplies it right back: the division happens in AudioRecord.cpp, where frameCount is computed and validated to confirm that the requested configuration is supported, and the multiplication happens in the JNI layer when converting frames back to bytes.
Taking getMinBufferSize(44100, MONO, 16BIT) as an example, i.e. sample_rate = 44100, channel_count = 1, and 2 bytes per sample:
BufferSize = (((1024 * sample_rate / 44100) + 15) / 16) * 16 * channel_count * sizeof(short) * 2 = 4096
In other words, the minimum buffer size is: resampled period size × channel count × bytes per sample × 2, where the factor of 2 comes from "We double the size of input buffer for ping pong use of record buffer". For sample sizes: PCM_8_BIT is an unsigned char, PCM_16_BIT a short, and PCM_32_BIT an int.
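Chaining the three layers traced above reproduces the worked example end to end (all names here are ours, with the pcm_config_mm_in constants inlined):

```java
// End-to-end recomputation of getMinBufferSize(44100, MONO, PCM_16BIT),
// chaining HAL -> getMinFrameCount -> JNI. Illustrative sketch only.
public class MinBufferEndToEnd {
    // HAL: resampled period size, rounded up to a multiple of 16 frames, in bytes
    static int halBytes(int sampleRate, int channelCount) {
        int frames = (1024 * sampleRate) / 44100; // pcm_config_mm_in values
        frames = ((frames + 15) / 16) * 16;
        return frames * channelCount * 2;         // sizeof(short)
    }

    static int minBufferSize(int sampleRate, int channelCount, int bytesPerSample) {
        // AudioRecord::getMinFrameCount: double, then convert bytes -> frames
        int frameCount = (halBytes(sampleRate, channelCount) * 2)
                / (channelCount * bytesPerSample);
        // JNI layer: frames -> bytes
        return frameCount * channelCount * bytesPerSample;
    }

    public static void main(String[] args) {
        System.out.println(minBufferSize(44100, 1, 2)); // -> 4096
    }
}
```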
The author's skills are limited; if there are any errors or omissions in this article, corrections from readers would be greatly appreciated!
