Audio System Startup and Initialization

Analysis based on Android P.


The Android boot process can be roughly divided into two major stages:

  1. Linux kernel startup;
  2. Android framework startup;

Linux kernel startup:

This stage covers the bootloader, driver initialization, mounting the file systems, and so on. At the very end of kernel startup, the kernel starts the first user-space process, init, which is the parent of all user processes; from there the Android framework startup stage begins;

Android framework startup:

After init starts, it parses the init.rc script (/system/core/rootdir/init.rc). When the mount_all command mounts the partitions, init also loads every rc script found under the etc/init directories of the system, vendor, odm, etc. partitions;

audioserver.rc

Just as in the cameraserver startup flow analyzed earlier, init loads the server's rc file to create and start the corresponding process;

Looking at the code, /frameworks/av/media/ contains an audioserver directory with an audioserver.rc file:

# started by init; the executable path is /system/bin/audioserver
service audioserver /system/bin/audioserver
    # service class (this service belongs to the core class)
    class core
    # user to run as
    user audioserver
    # media gid needed for /dev/fm (radio) and for /data/misc/media (tee)
    # supplementary groups; note the process joins both the audio and camera groups
    group audio camera drmrpc inet media mediadrm net_bt net_bt_admin net_bw_acct
    ioprio rt 4
    writepid /dev/cpuset/foreground/tasks /dev/stune/foreground/tasks
    onrestart restart vendor.audio-hal-2-0
    # Keep the original service name for backward compatibility when upgrading
    # O-MR1 devices with framework-only.
    onrestart restart audio-hal-2-0
​
on property:vts.native_server.on=1
    stop audioserver
on property:vts.native_server.on=0
    start audioserver

The file first defines the path the audioserver binary is loaded from, then its service class, user, groups, and other settings;

Because init loads every rc script under the etc/init directories of system, vendor, odm, etc., audioserver.rc is picked up by init as well; it is installed to such a directory by a rule in the Android.mk file:

LOCAL_PATH:= $(call my-dir)
​
include $(CLEAR_VARS)
​
LOCAL_SRC_FILES := \
    main_audioserver.cpp \
    ../libaudioclient/aidl/android/media/IAudioRecord.aidl
​
LOCAL_SHARED_LIBRARIES := \
    libaaudioservice \
    libaudioflinger \
    libaudiopolicyservice \
    libbinder \
    libcutils \
    liblog \
    libhidltransport \
    libhwbinder \
    libmedia \
    libmedialogservice \
    libnbaio \
    libsoundtriggerservice \
    libutils
​
# TODO oboeservice is the old folder name for aaudioservice. It will be changed.
LOCAL_C_INCLUDES := \
    frameworks/av/services/audioflinger \
    frameworks/av/services/audiopolicy \
    frameworks/av/services/audiopolicy/common/managerdefinitions/include \
    frameworks/av/services/audiopolicy/common/include \
    frameworks/av/services/audiopolicy/engine/interface \
    frameworks/av/services/audiopolicy/service \
    frameworks/av/services/medialog \
    frameworks/av/services/oboeservice \
    frameworks/av/services/radio \
    frameworks/av/services/soundtrigger \
    frameworks/av/media/libaaudio/include \
    frameworks/av/media/libaaudio/src \
    frameworks/av/media/libaaudio/src/binding \
    frameworks/av/media/libmedia \
    $(call include-path-for, audio-utils) \
    external/sonic \
​
LOCAL_AIDL_INCLUDES := \
        frameworks/av/media/libaudioclient/aidl
​
# If AUDIOSERVER_MULTILIB in device.mk is non-empty then it is used to control
# the LOCAL_MULTILIB for all audioserver exclusive libraries.
# This is relevant for 64 bit architectures where either or both
# 32 and 64 bit libraries may be built.
#
# AUDIOSERVER_MULTILIB may be set as follows:
#   32      to build 32 bit audioserver libraries and 32 bit audioserver.
#   64      to build 64 bit audioserver libraries and 64 bit audioserver.
#   both    to build both 32 bit and 64 bit libraries,
#           and use primary target architecture (32 or 64) for audioserver.
#   first   to build libraries and audioserver for the primary target architecture only.
#   <empty> to build both 32 and 64 bit libraries and 32 bit audioserver.
​
ifeq ($(strip $(AUDIOSERVER_MULTILIB)),)
LOCAL_MULTILIB := 32
else
LOCAL_MULTILIB := $(AUDIOSERVER_MULTILIB)
endif
​
LOCAL_MODULE := audioserver
​
LOCAL_INIT_RC := audioserver.rc
​
LOCAL_CFLAGS := -Werror -Wall
​
include $(BUILD_EXECUTABLE)

The line LOCAL_INIT_RC := audioserver.rc installs audioserver.rc into /system/etc/init, which is how init.rc ends up pulling in audioserver.rc;

Android.mk also names the source file to compile, main_audioserver.cpp, which sits in the same directory as audioserver.rc.

So when init processes audioserver.rc, it launches the audioserver executable, and the main() function of main_audioserver.cpp initializes audioserver;

main_audioserver.cpp

/*
 * Copyright (C) 2015 The Android Open Source Project
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
​
#define LOG_TAG "audioserver"
//#define LOG_NDEBUG 0
​
#include <fcntl.h>
#include <sys/prctl.h>
#include <sys/wait.h>
#include <cutils/properties.h>
​
#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>
#include <binder/IServiceManager.h>
#include <hidl/HidlTransportSupport.h>
#include <utils/Log.h>
​
// from LOCAL_C_INCLUDES
#include "aaudio/AAudioTesting.h"
#include "AudioFlinger.h"
#include "AudioPolicyService.h"
#include "AAudioService.h"
#include "utility/AAudioUtilities.h"
#include "MediaLogService.h"
#include "MediaUtils.h"
#include "SoundTriggerHwService.h"
​
using namespace android;
​
int main(int argc __unused, char **argv)
{
    // TODO: update with refined parameters
    limitProcessMemory(
        "audio.maxmem", /* "ro.audio.maxmem", property that defines limit */
        (size_t)512 * (1 << 20), /* SIZE_MAX, upper limit in bytes */
        20 /* upper limit as percentage of physical RAM */);
​
    signal(SIGPIPE, SIG_IGN);
​
    bool doLog = (bool) property_get_bool("ro.test_harness", 0);
​
    pid_t childPid;
    // FIXME The advantage of making the process containing media.log service the parent process of
    // the process that contains the other audio services, is that it allows us to collect more
    // detailed information such as signal numbers, stop and continue, resource usage, etc.
    // But it is also more complex.  Consider replacing this by independent processes, and using
    // binder on death notification instead.
    if (doLog && (childPid = fork()) != 0) {
        // parent process: runs the media.log service
        ........................
    } else {
        // all other services
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
            setpgid(0, 0);                      // but if I die first, don't kill my parent
        }
        android::hardware::configureRpcThreadpool(4, false /*callerWillJoin*/);
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();
        AudioPolicyService::instantiate();
​
        // AAudioService should only be used in OC-MR1 and later.
        // And only enable the AAudioService if the system MMAP policy explicitly allows it.
        // This prevents a client from misusing AAudioService when it is not supported.
        aaudio_policy_t mmapPolicy = property_get_int32(AAUDIO_PROP_MMAP_POLICY,
                                                        AAUDIO_POLICY_NEVER);
        if (mmapPolicy == AAUDIO_POLICY_AUTO || mmapPolicy == AAUDIO_POLICY_ALWAYS) {
            AAudioService::instantiate();
        }
​
        SoundTriggerHwService::instantiate();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}

Much like the cameraserver startup flow, main() calls android::hardware::configureRpcThreadpool(4, false /* callerWillJoin */) to set the number of HIDL threads;

It then calls AudioFlinger::instantiate() to initialize AudioFlinger;

AudioPolicyService::instantiate() to initialize AudioPolicyService;

and SoundTriggerHwService::instantiate() to initialize SoundTriggerHwService, which appears to be the service for voice/sound-trigger recognition;

Before SoundTriggerHwService is initialized, there is a check that decides whether to initialize AAudioService, so let's first look at what AAudioService is;

AAudio and MMAP

AAudio is an audio API introduced in Android 8.0. Android 8.1 adds enhancements that, when combined with an MMAP-capable HAL and driver, reduce latency.

AAudio Architecture

AAudio is a new native C API that provides an alternative to OpenSL ES. It uses a "builder" design pattern to create audio streams.

AAudio offers a low-latency data path. In "exclusive" mode, the client application writes directly into a memory-mapped buffer shared with the ALSA driver. In "shared" mode, the MMAP buffer is consumed by the mixer running inside audioserver. Because exclusive mode bypasses the mixer entirely, its latency is noticeably lower.

We will not analyze the AAudio architecture in detail here; that is left for a later article;

Following the service startup order, we will analyze the initialization of AudioFlinger, AudioPolicyService, and SoundTriggerHwService in turn;

AudioFlinger

instantiate()

As shown above, main() calls AudioFlinger::instantiate() to initialize AudioFlinger. instantiate() is defined in BinderService, so let's first look at how AudioFlinger's inheritance is declared in AudioFlinger.h:

........................
namespace android {
​
class AudioMixer;
class AudioBuffer;
class AudioResampler;
class DeviceHalInterface;
class DevicesFactoryHalInterface;
class EffectsFactoryHalInterface;
class FastMixer;
class PassthruBufferProvider;
class RecordBufferConverter;
class ServerProxy;
​
// ----------------------------------------------------------------------------
​
static const nsecs_t kDefaultStandbyTimeInNsecs = seconds(3);
​
#define INCLUDING_FROM_AUDIOFLINGER_H
​
class AudioFlinger :
    public BinderService<AudioFlinger>,
    public BnAudioFlinger
{
    friend class BinderService<AudioFlinger>;   // for AudioFlinger()
​
public:
    static const char* getServiceName() ANDROID_API { return "media.audio_flinger"; }
    ........................

AudioFlinger inherits from BinderService, a class template that wraps the Binder plumbing and adds the service to ServiceManager; in other words, it registers AudioFlinger with ServiceManager;

// ---------------------------------------------------------------------------
namespace android {
​
template<typename SERVICE>
class BinderService
{
public:
    static status_t publish(bool allowIsolated = false,
                            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        sp<IServiceManager> sm(defaultServiceManager());
        // getServiceName() returns the service name, e.g. "media.audio_flinger", defined in AudioFlinger.h
        // SERVICE: the class template parameter -- template<typename SERVICE>
        return sm->addService(String16(SERVICE::getServiceName()), new SERVICE(), allowIsolated,
                              dumpFlags);
    }
​
    static void publishAndJoinThreadPool(
            bool allowIsolated = false,
            int dumpFlags = IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT) {
        publish(allowIsolated, dumpFlags);
        joinThreadPool();
    }
​
    static void instantiate() { publish(); }
​
    static status_t shutdown() { return NO_ERROR; }
​
private:
    static void joinThreadPool() {
        sp<ProcessState> ps(ProcessState::self());
        ps->startThreadPool();
        ps->giveThreadPoolName();
        IPCThreadState::self()->joinThreadPool();
    }
};
​
​
}; // namespace android
// ---------------------------------------------------------------------------
#endif // ANDROID_BINDER_SERVICE_H

instantiate() calls publish(), which obtains the ServiceManager service and registers the newly constructed service with it;

This step completes AudioFlinger's startup and initialization and registers it with ServiceManager, so any other component can obtain the service through ServiceManager.

new SERVICE()

AudioFlinger::AudioFlinger()
    : BnAudioFlinger(),
      mMediaLogNotifier(new AudioFlinger::MediaLogNotifier()),
      mPrimaryHardwareDev(NULL),
      mAudioHwDevs(NULL),
      mHardwareStatus(AUDIO_HW_IDLE),
      mMasterVolume(1.0f),
      mMasterMute(false),
      // mNextUniqueId(AUDIO_UNIQUE_ID_USE_MAX),
      mMode(AUDIO_MODE_INVALID),
      mBtNrecIsOff(false),
      mIsLowRamDevice(true),
      mIsDeviceTypeKnown(false),
      mTotalMemory(0),
      mClientSharedHeapSize(kMinimumClientSharedHeapSizeBytes),
      mGlobalEffectEnableTime(0),
      mSystemReady(false)
{
    // unsigned instead of audio_unique_id_use_t, because ++ operator is unavailable for enum
    for (unsigned use = AUDIO_UNIQUE_ID_USE_UNSPECIFIED; use < AUDIO_UNIQUE_ID_USE_MAX; use++) {
        // zero ID has a special meaning, so unavailable
        mNextUniqueIds[use] = AUDIO_UNIQUE_ID_USE_MAX;
    }
​
    getpid_cached = getpid();
    const bool doLog = property_get_bool("ro.test_harness", false);
    if (doLog) {
        mLogMemoryDealer = new MemoryDealer(kLogMemorySize, "LogWriters",
                MemoryHeapBase::READ_ONLY);
        (void) pthread_once(&sMediaLogOnce, sMediaLogInit);
    }
​
    // reset battery stats.
    // if the audio service has crashed, battery stats could be left
    // in bad state, reset the state upon service start.
    BatteryNotifier::getInstance().noteResetAudio();
​
    mDevicesFactoryHal = DevicesFactoryHalInterface::create();
    mEffectsFactoryHal = EffectsFactoryHalInterface::create();
​
    mMediaLogNotifier->run("MediaLogNotifier");
​
    ........................
}

The constructor essentially does three things:

  • BatteryNotifier::getInstance().noteResetAudio(): resets the battery statistics; if the audio service previously crashed, the battery state may have been left inconsistent, so it is reset at service start;
  • DevicesFactoryHalInterface::create(): creates the devices HAL interface, used for the HIDL binding;
  • EffectsFactoryHalInterface::create(): creates the audio-effects HAL interface, used for the HIDL binding;

DevicesFactoryHalInterface::create()

/frameworks/av/media/libaudiohal/DevicesFactoryHalInterface.cpp

namespace android {
​
// static
sp<DevicesFactoryHalInterface> DevicesFactoryHalInterface::create() {
    if (hardware::audio::V4_0::IDevicesFactory::getService() != nullptr) {
        return new V4_0::DevicesFactoryHalHybrid();
    }
    if (hardware::audio::V2_0::IDevicesFactory::getService() != nullptr) {
        return new DevicesFactoryHalHybrid();
    }
    return nullptr;
}
​
}

Depending on the hardware::audio HAL version available, the matching DevicesFactoryHalHybrid object is created; we will take V4_0 as the example for the rest of the analysis;

namespace android {
namespace V4_0 {
​
DevicesFactoryHalHybrid::DevicesFactoryHalHybrid()
        : mLocalFactory(new DevicesFactoryHalLocal()),
          mHidlFactory(new DevicesFactoryHalHidl()) {
}
​
DevicesFactoryHalHybrid::~DevicesFactoryHalHybrid() {
}
​
status_t DevicesFactoryHalHybrid::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    if (mHidlFactory != 0 && strcmp(AUDIO_HARDWARE_MODULE_ID_A2DP, name) != 0 &&
        strcmp(AUDIO_HARDWARE_MODULE_ID_HEARING_AID, name) != 0) {
        return mHidlFactory->openDevice(name, device);
    }
    return mLocalFactory->openDevice(name, device);
}
​
} // namespace V4_0
}

The DevicesFactoryHalHybrid constructor initializes mLocalFactory and mHidlFactory. The class then defines openDevice(), which uses a couple of checks to decide whether to forward the call to mLocalFactory->openDevice() or mHidlFactory->openDevice();

The macros AUDIO_HARDWARE_MODULE_ID_A2DP (Bluetooth A2DP audio) and AUDIO_HARDWARE_MODULE_ID_HEARING_AID (hearing aids) are defined in /system/media/audio/include/system/audio.h:

// primary output device
#define AUDIO_HARDWARE_MODULE_ID_PRIMARY "primary"
// Bluetooth A2DP output device
#define AUDIO_HARDWARE_MODULE_ID_A2DP "a2dp"
// USB output device
#define AUDIO_HARDWARE_MODULE_ID_USB "usb"
#define AUDIO_HARDWARE_MODULE_ID_REMOTE_SUBMIX "r_submix"
#define AUDIO_HARDWARE_MODULE_ID_CODEC_OFFLOAD "codec_offload"
#define AUDIO_HARDWARE_MODULE_ID_STUB "stub"
#define AUDIO_HARDWARE_MODULE_ID_HEARING_AID "hearing_aid"

These define the audio hardware module IDs;

Leaving corner cases aside, the rule in the code is: the A2DP and hearing-aid modules are opened through mLocalFactory->openDevice(), while every other module (including primary) goes through mHidlFactory->openDevice();

EffectsFactoryHalInterface::create()

/frameworks/av/media/libaudiohal/EffectsFactoryHalInterface.cpp

namespace android {
​
// static
sp<EffectsFactoryHalInterface> EffectsFactoryHalInterface::create() {
    if (hardware::audio::effect::V4_0::IEffectsFactory::getService() != nullptr) {
        return new V4_0::EffectsFactoryHalHidl();
    }
    if (hardware::audio::effect::V2_0::IEffectsFactory::getService() != nullptr) {
        return new EffectsFactoryHalHidl();
    }
    return nullptr;
}
​
// static
bool EffectsFactoryHalInterface::isNullUuid(const effect_uuid_t *pEffectUuid) {
    return memcmp(pEffectUuid, EFFECT_UUID_NULL, sizeof(effect_uuid_t)) == 0;
}
​
} // namespace android

Likewise, the EffectsFactoryHalHidl object matching the audio HAL version is created;

onFirstRef()

void AudioFlinger::onFirstRef()
{
    Mutex::Autolock _l(mLock);
​
    /* TODO: move all this work into an Init() function */
    char val_str[PROPERTY_VALUE_MAX] = { 0 };
    if (property_get("ro.audio.flinger_standbytime_ms", val_str, NULL) >= 0) {
        uint32_t int_val;
        if (1 == sscanf(val_str, "%u", &int_val)) {
            mStandbyTimeInNsecs = milliseconds(int_val);
            ALOGI("Using %u mSec as standby time.", int_val);
        } else {
            mStandbyTimeInNsecs = kDefaultStandbyTimeInNsecs;
            ALOGI("Using default %u mSec as standby time.",
                    (uint32_t)(mStandbyTimeInNsecs / 1000000));
        }
    }
​
    mPatchPanel = new PatchPanel(this);
​
    mMode = AUDIO_MODE_NORMAL;
​
    gAudioFlinger = this;
}

onFirstRef() reads the property "ro.audio.flinger_standbytime_ms" and stores the value in mStandbyTimeInNsecs (the standby timeout); if the property is unset or unparsable, the default kDefaultStandbyTimeInNsecs is used. It then creates a PatchPanel bound to this AudioFlinger instance and sets the audio mode to AUDIO_MODE_NORMAL.

AudioPolicyService

AudioPolicyService::instantiate() follows essentially the same logic as AudioFlinger: instantiate() registers a service named media.audio_policy, and during registration AudioPolicyService::onFirstRef() is invoked;

So we will analyze AudioPolicyService's constructor and onFirstRef() directly:

/frameworks/av/services/audiopolicy/service/AudioPolicyService.cpp

AudioPolicyService::AudioPolicyService()
    : BnAudioPolicyService(), mpAudioPolicyDev(NULL), mpAudioPolicy(NULL),
      mAudioPolicyManager(NULL), mAudioPolicyClient(NULL), mPhoneState(AUDIO_MODE_INVALID)
{
}
​
void AudioPolicyService::onFirstRef()
{
    {
        Mutex::Autolock _l(mLock);
​
        // start tone playback thread
        mTonePlaybackThread = new AudioCommandThread(String8("ApmTone"), this);
        // start audio commands thread
        mAudioCommandThread = new AudioCommandThread(String8("ApmAudio"), this);
        // start output activity command thread
        mOutputCommandThread = new AudioCommandThread(String8("ApmOutput"), this);
​
        mAudioPolicyClient = new AudioPolicyClient(this);
        mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);
    }
    // load audio processing modules
    sp<AudioPolicyEffects>audioPolicyEffects = new AudioPolicyEffects();
    {
        Mutex::Autolock _l(mLock);
        mAudioPolicyEffects = audioPolicyEffects;
    }
​
    mUidPolicy = new UidPolicy(this);
    mUidPolicy->registerSelf();
}

Let's go straight to onFirstRef(). Its main work is:

  1. Create three AudioCommandThreads, named "ApmTone", "ApmAudio", and "ApmOutput";
  2. Instantiate the AudioPolicyClient object;
  3. Initialize AudioPolicyManager, passing it the AudioPolicyClient object just created;
  4. Initialize AudioPolicyEffects (the audio effects);

Creating the AudioCommandThreads

First, the AudioCommandThread constructor and its onFirstRef():

AudioPolicyService::AudioCommandThread::AudioCommandThread(String8 name,
                                                           const wp<AudioPolicyService>& service)
    : Thread(false), mName(name), mService(service)
{
    mpToneGenerator = NULL;
}
​
​
AudioPolicyService::AudioCommandThread::~AudioCommandThread()
{
    if (!mAudioCommands.isEmpty()) {
        release_wake_lock(mName.string());
    }
    mAudioCommands.clear();
    delete mpToneGenerator;
}
​
// The AudioCommandThread objects are created in AudioPolicyService::onFirstRef(); when an AudioCommandThread is first strongly referenced, its own onFirstRef() below is invoked
void AudioPolicyService::AudioCommandThread::onFirstRef()
{
    run(mName.string(), ANDROID_PRIORITY_AUDIO);
}
​
bool AudioPolicyService::AudioCommandThread::threadLoop()
{
    nsecs_t waitTime = -1;
​
    mLock.lock();
    while (!exitPending())
    {
        sp<AudioPolicyService> svc;
        while (!mAudioCommands.isEmpty() && !exitPending()) {
            nsecs_t curTime = systemTime();
            // commands are sorted by increasing time stamp: execute them from index 0 and up
            if (mAudioCommands[0]->mTime <= curTime) {
                sp<AudioCommand> command = mAudioCommands[0];
                mAudioCommands.removeAt(0);
                mLastCommand = command;
​
                switch (command->mCommand) {
                case START_TONE: {
                    mLock.unlock();
                    ToneData *data = (ToneData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing start tone %d on stream %d",
                            data->mType, data->mStream);
                    delete mpToneGenerator;
                    mpToneGenerator = new ToneGenerator(data->mStream, 1.0);
                    mpToneGenerator->startTone(data->mType);
                    mLock.lock();
                    }break;
                case STOP_TONE: {
                    mLock.unlock();
                    ALOGV("AudioCommandThread() processing stop tone");
                    if (mpToneGenerator != NULL) {
                        mpToneGenerator->stopTone();
                        delete mpToneGenerator;
                        mpToneGenerator = NULL;
                    }
                    mLock.lock();
                    }break;
                        
                ..............................
                    
                case RECORDING_CONFIGURATION_UPDATE: {
                    RecordingConfigurationUpdateData *data =
                            (RecordingConfigurationUpdateData *)command->mParam.get();
                    ALOGV("AudioCommandThread() processing recording configuration update");
                    svc = mService.promote();
                    if (svc == 0) {
                        break;
                    }
                    mLock.unlock();
                    svc->doOnRecordingConfigurationUpdate(data->mEvent, &data->mClientInfo,
                            &data->mClientConfig, &data->mDeviceConfig,
                            data->mPatchHandle);
                    mLock.lock();
                    } break;
                default:
                    ALOGW("AudioCommandThread() unknown command %d", command->mCommand);
                }
                {
                    Mutex::Autolock _l(command->mLock);
                    if (command->mWaitStatus) {
                        command->mWaitStatus = false;
                        command->mCond.signal();
                    }
                }
                waitTime = -1;
                // release mLock before releasing strong reference on the service as
                // AudioPolicyService destructor calls AudioCommandThread::exit() which
                // acquires mLock.
                mLock.unlock();
                svc.clear();
                mLock.lock();
            } else {
                waitTime = mAudioCommands[0]->mTime - curTime;
                break;
            }
        }
​
        // release delayed commands wake lock if the queue is empty
        if (mAudioCommands.isEmpty()) {
            release_wake_lock(mName.string());
        }
​
        // At this stage we have either an empty command queue or the first command in the queue
        // has a finite delay. So unless we are exiting it is safe to wait.
        if (!exitPending()) {
            ALOGV("AudioCommandThread() going to sleep");
            if (waitTime == -1) {
                mWaitWorkCV.wait(mLock);
            } else {
                mWaitWorkCV.waitRelative(mLock, waitTime);
            }
        }
    }
    // release delayed commands wake lock before quitting
    if (!mAudioCommands.isEmpty()) {
        release_wake_lock(mName.string());
    }
    mLock.unlock();
    return false;
}

When an AudioCommandThread is constructed, the thread name passed in is stored in mName; onFirstRef() then calls run(), which starts the threadLoop() body. Inside the loop, two nested while loops wait for incoming commands and dispatch each request to the matching handler, covering things like tone playback, volume control, and the selection and switching of audio devices;

Instantiating AudioPolicyClient

Once the three AudioCommandThreads are created, new AudioPolicyClient(this) is called next; AudioPolicyClient is defined in AudioPolicyService.h:

/frameworks/av/services/audiopolicy/service/AudioPolicyService.h

class AudioPolicyClient : public AudioPolicyClientInterface
{
    public:
    explicit AudioPolicyClient(AudioPolicyService *service) : mAudioPolicyService(service) {}
    virtual ~AudioPolicyClient() {}

AudioPolicyClient derives from AudioPolicyClientInterface and stores the AudioPolicyService pointer passed in (this). It declares many virtual methods, which are implemented in AudioPolicyClientImpl, and those implementations are all realized by delegating to AudioFlinger;

Initializing AudioPolicyManager

After the AudioPolicyClient object has been created successfully, AudioPolicyManager is initialized;

mAudioPolicyManager = createAudioPolicyManager(mAudioPolicyClient);

createAudioPolicyManager() is actually declared in AudioPolicyInterface.h:

extern "C" AudioPolicyInterface* createAudioPolicyManager(AudioPolicyClientInterface *clientInterface);
extern "C" void destroyAudioPolicyManager(AudioPolicyInterface *interface);

The createAudioPolicyManager() declared in AudioPolicyInterface.h is implemented in AudioPolicyFactory.cpp:

namespace android {
​
extern "C" AudioPolicyInterface* createAudioPolicyManager(
        AudioPolicyClientInterface *clientInterface)
{
    return new AudioPolicyManager(clientInterface);
}
​
extern "C" void destroyAudioPolicyManager(AudioPolicyInterface *interface)
{
    delete interface;
}
​
}

This function constructs the AudioPolicyManager object, passing in the AudioPolicyClient object created just before;

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface,
                                       bool /*forTesting*/)
    :
    mUidCached(getuid()),
    mpClientInterface(clientInterface),
    mLimitRingtoneVolume(false), mLastVoiceVolume(-1.0f),
    mA2dpSuspended(false),
#ifdef USE_XML_AUDIO_POLICY_CONF
    /* set up the volume curves */
    mVolumeCurves(new VolumeCurvesCollection()),
    mConfig(mHwModulesAll, mAvailableOutputDevices, mAvailableInputDevices,
            mDefaultOutputDevice, static_cast<VolumeCurvesCollection*>(mVolumeCurves.get())),
#else
    mVolumeCurves(new StreamDescriptorCollection()),
    mConfig(mHwModulesAll, mAvailableOutputDevices, mAvailableInputDevices,
            mDefaultOutputDevice),
#endif
    mAudioPortGeneration(1),
    mBeaconMuteRefCount(0),
    mBeaconPlayingRefCount(0),
    mBeaconMuted(false),
    mTtsOutputAvailable(false),
    mMasterMono(false),
    mMusicEffectOutput(AUDIO_IO_HANDLE_NONE),
    mHasComputedSoundTriggerSupportsConcurrentCapture(false)
{
}
​
AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface)
        : AudioPolicyManager(clientInterface, false /*forTesting*/)
{
    loadConfig();
    initialize();
}

The AudioPolicyManager constructor initializes a long list of members, then calls loadConfig() and initialize();

Note that AudioPolicyManager actually calls many of the functions declared on AudioPolicyClient, which ultimately end up in AudioFlinger;

loadConfig()
void AudioPolicyManager::loadConfig() {
#ifdef USE_XML_AUDIO_POLICY_CONF
    // parse the audio_policy_configuration.xml file
    if (deserializeAudioPolicyXmlConfig(getConfig()) != NO_ERROR) {
#else
    // #define AUDIO_POLICY_CONFIG_FILE "/system/etc/audio_policy.conf"
    // #define AUDIO_POLICY_VENDOR_CONFIG_FILE "/vendor/etc/audio_policy.conf"
    if ((ConfigParsingUtils::loadConfig(AUDIO_POLICY_VENDOR_CONFIG_FILE, getConfig()) != NO_ERROR)
           && (ConfigParsingUtils::loadConfig(AUDIO_POLICY_CONFIG_FILE, getConfig()) != NO_ERROR)) {
#endif
        ALOGE("could not load audio policy configuration file, setting defaults");
        getConfig().setDefault();
    }
}

This function loads the audio policy configuration. The USE_XML_AUDIO_POLICY_CONF macro selects which set of sources to try: an XML-format configuration or a legacy .conf-format configuration.

It also calls getConfig(), which is defined in the AudioPolicyManager.h header:

AudioPolicyConfig& getConfig() { return mConfig; }
​
AudioPolicyConfig mConfig;

mConfig is of type AudioPolicyConfig and is initialized in the member-initializer list of the AudioPolicyManager constructor;

AudioPolicyManager::AudioPolicyManager(AudioPolicyClientInterface *clientInterface,
                                       bool /*forTesting*/)
   ........................
#ifdef USE_XML_AUDIO_POLICY_CONF
    mVolumeCurves(new VolumeCurvesCollection()),
    mConfig(mHwModulesAll, mAvailableOutputDevices, mAvailableInputDevices,
            mDefaultOutputDevice, static_cast<VolumeCurvesCollection*>(mVolumeCurves.get())),
#else
    mVolumeCurves(new StreamDescriptorCollection()),
    mConfig(mHwModulesAll, mAvailableOutputDevices, mAvailableInputDevices,
            mDefaultOutputDevice),
#endif
    ........................
{
}

So the real purpose of loadConfig() is to populate mConfig, and there are three ways it can be filled:

  1. deserializeAudioPolicyXmlConfig()
  2. ConfigParsingUtils::loadConfig()
  3. getConfig().setDefault()

deserializeAudioPolicyXmlConfig()
static status_t deserializeAudioPolicyXmlConfig(AudioPolicyConfig &config) {
    // #define AUDIO_POLICY_XML_CONFIG_FILE_PATH_MAX_LENGTH 128
    char audioPolicyXmlConfigFile[AUDIO_POLICY_XML_CONFIG_FILE_PATH_MAX_LENGTH];
    std::vector<const char*> fileNames;
    status_t ret;
​
    if (property_get_bool("ro.bluetooth.a2dp_offload.supported", false) &&
        property_get_bool("persist.bluetooth.a2dp_offload.disabled", false)) {
        // A2DP offload supported but disabled: try to use special XML file
        fileNames.push_back(AUDIO_POLICY_A2DP_OFFLOAD_DISABLED_XML_CONFIG_FILE_NAME);
    }
    // #define AUDIO_POLICY_XML_CONFIG_FILE_NAME "audio_policy_configuration.xml"
    fileNames.push_back(AUDIO_POLICY_XML_CONFIG_FILE_NAME);
​
    for (const char* fileName : fileNames) {
        for (int i = 0; i < kConfigLocationListSize; i++) {
            PolicySerializer serializer;
            snprintf(audioPolicyXmlConfigFile, sizeof(audioPolicyXmlConfigFile),
                     "%s/%s", kConfigLocationList[i], fileName);
            ret = serializer.deserialize(audioPolicyXmlConfigFile, config);
            if (ret == NO_ERROR) {
                return ret;
            }
        }
    }
    return ret;
}

This method uses PolicySerializer's deserialize() to parse the candidate XML files and fills config with their contents;

ConfigParsingUtils::loadConfig()
status_t ConfigParsingUtils::loadConfig(const char *path, AudioPolicyConfig &config)
{
    cnode *root;
    char *data;
​
    data = (char *)load_file(path, NULL);
    if (data == NULL) {
        return -ENODEV;
    }
    root = config_node("", "");
    config_load(root, data);
​
    HwModuleCollection hwModules;
    loadHwModules(root, hwModules, config);
​
    // legacy audio_policy.conf files have one global_configuration section, attached to primary.
    loadGlobalConfig(root, config, hwModules.getModuleFromName(AUDIO_HARDWARE_MODULE_ID_PRIMARY));
​
    config.setHwModules(hwModules);
​
    config_free(root);
    free(root);
    free(data);
​
    ALOGI("loadAudioPolicyConfig() loaded %s\n", path);
​
    return NO_ERROR;
}

This likewise fills in config, from the legacy .conf format;

getConfig().setDefault()
void setDefault(void)
{
    ..............................
    mAvailableOutputDevices.add(mDefaultOutputDevices);
    mAvailableInputDevices.add(defaultInputDevice);
​
    module = new HwModule(AUDIO_HARDWARE_MODULE_ID_PRIMARY);
​
    sp<OutputProfile> outProfile;
    outProfile = new OutputProfile(String8("primary"));
    ..............................
    module->addOutputProfile(outProfile);
​
    sp<InputProfile> inProfile;
    inProfile = new InputProfile(String8("primary"));
    ..............................
    module->addInputProfile(inProfile);
​
    mHwModules.add(module);
}

Likewise, this path fills config, in this case with hard-coded defaults;

At this point the AudioPolicyConfig object is fully populated;

initialize()
status_t AudioPolicyManager::initialize() {
    // initialize the volume curves for each audio stream type
    mVolumeCurves->initializeVolumeCurves(getConfig().isSpeakerDrcEnabled());
​
    // Once policy config has been parsed, retrieve an instance of the engine and initialize it.
    audio_policy::EngineInstance *engineInstance = audio_policy::EngineInstance::getInstance();
    if (!engineInstance) {
        ALOGE("%s:  Could not get an instance of policy engine", __FUNCTION__);
        return NO_INIT;
    }
    // Retrieve the Policy Manager Interface
    mEngine = engineInstance->queryInterface<AudioPolicyManagerInterface>();
    if (mEngine == NULL) {
        ALOGE("%s: Failed to get Policy Engine Interface", __FUNCTION__);
        return NO_INIT;
    }
    mEngine->setObserver(this);
    status_t status = mEngine->initCheck();
    if (status != NO_ERROR) {
        LOG_FATAL("Policy engine not initialized(err=%d)", status);
        return status;
    }
​
    // mAvailableOutputDevices and mAvailableInputDevices now contain all attached devices
    // open all output streams needed to access attached devices
    audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
    audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;
    for (const auto& hwModule : mHwModulesAll) {
        // Load the audio policy hardware abstraction library
        hwModule->setHandle(mpClientInterface->loadHwModule(hwModule->getName()));
        if (hwModule->getHandle() == AUDIO_MODULE_HANDLE_NONE) {
            ALOGW("could not open HW module %s", hwModule->getName());
            continue;
        }
        mHwModules.push_back(hwModule);
        // open all output streams needed to access attached devices
        // except for direct output streams that are only opened when they are actually
        // required by an app.
        // This also validates mAvailableOutputDevices list
        for (const auto& outProfile : hwModule->getOutputProfiles()) {
            if (!outProfile->canOpenNewIo()) {
                ALOGE("Invalid Output profile max open count %u for profile %s",
                      outProfile->maxOpenCount, outProfile->getTagName().c_str());
                continue;
            }
            if (!outProfile->hasSupportedDevices()) {
                ALOGW("Output profile contains no device on module %s", hwModule->getName());
                continue;
            }
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_TTS) != 0) {
                mTtsOutputAvailable = true;
            }
​
            if ((outProfile->getFlags() & AUDIO_OUTPUT_FLAG_DIRECT) != 0) {
                continue;
            }
            audio_devices_t profileType = outProfile->getSupportedDevicesType();
            if ((profileType & mDefaultOutputDevice->type()) != AUDIO_DEVICE_NONE) {
                profileType = mDefaultOutputDevice->type();
            } else {
                // chose first device present in profile's SupportedDevices also part of
                // outputDeviceTypes
                profileType = outProfile->getSupportedDeviceForType(outputDeviceTypes);
            }
            if ((profileType & outputDeviceTypes) == 0) {
                continue;
            }
            sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
                                                                                 mpClientInterface);
            const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
            const DeviceVector &devicesForType = supportedDevices.getDevicesFromType(profileType);
            String8 address = devicesForType.size() > 0 ? devicesForType.itemAt(0)->mAddress
                    : String8("");
            // When an output stream device is opened and a PlaybackThread is created, the system allocates a globally unique audio_io_handle_t and adds the (audio_io_handle_t, PlaybackThread) pair to the key-value vector mPlaybackThreads; the mapping is one-to-one
            audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
            // Open the output device
            status_t status = outputDesc->open(nullptr, profileType, address,
                                           AUDIO_STREAM_DEFAULT, AUDIO_OUTPUT_FLAG_NONE, &output);
​
            if (status != NO_ERROR) {
                ALOGW("Cannot open output stream for device %08x on hw module %s",
                      outputDesc->mDevice,
                      hwModule->getName());
            } else {
                for (const auto& dev : supportedDevices) {
                    ssize_t index = mAvailableOutputDevices.indexOf(dev);
                    // give a valid ID to an attached device once confirmed it is reachable
                    if (index >= 0 && !mAvailableOutputDevices[index]->isAttached()) {
                        mAvailableOutputDevices[index]->attach(hwModule);
                    }
                }
                if (mPrimaryOutput == 0 &&
                        outProfile->getFlags() & AUDIO_OUTPUT_FLAG_PRIMARY) {
                    mPrimaryOutput = outputDesc;
                }
                // Save the output device descriptor
                addOutput(output, outputDesc);
                // Set the output device
                setOutputDevice(outputDesc,
                                profileType,
                                true,
                                0,
                                NULL,
                                address);
            }
        }
        // open input streams needed to access attached devices to validate
        // mAvailableInputDevices list
        for (const auto& inProfile : hwModule->getInputProfiles()) {
            if (!inProfile->canOpenNewIo()) {
                ALOGE("Invalid Input profile max open count %u for profile %s",
                      inProfile->maxOpenCount, inProfile->getTagName().c_str());
                continue;
            }
            if (!inProfile->hasSupportedDevices()) {
                ALOGW("Input profile contains no device on module %s", hwModule->getName());
                continue;
            }
            // chose first device present in profile's SupportedDevices also part of
            // inputDeviceTypes
            audio_devices_t profileType = inProfile->getSupportedDeviceForType(inputDeviceTypes);
​
            if ((profileType & inputDeviceTypes) == 0) {
                continue;
            }
            sp<AudioInputDescriptor> inputDesc =
                    new AudioInputDescriptor(inProfile, mpClientInterface);
​
            DeviceVector inputDevices = mAvailableInputDevices.getDevicesFromType(profileType);
            //   the inputs vector must be of size >= 1, but we don't want to crash here
            String8 address = inputDevices.size() > 0 ? inputDevices.itemAt(0)->mAddress
                    : String8("");
            ALOGV("  for input device 0x%x using address %s", profileType, address.string());
            ALOGE_IF(inputDevices.size() == 0, "Input device list is empty!");
​
            audio_io_handle_t input = AUDIO_IO_HANDLE_NONE;
            // Open the input device
            status_t status = inputDesc->open(nullptr,
                                              profileType,
                                              address,
                                              AUDIO_SOURCE_MIC,
                                              AUDIO_INPUT_FLAG_NONE,
                                              &input);
​
            if (status == NO_ERROR) {
                for (const auto& dev : inProfile->getSupportedDevices()) {
                    ssize_t index = mAvailableInputDevices.indexOf(dev);
                    // give a valid ID to an attached device once confirmed it is reachable
                    if (index >= 0) {
                        sp<DeviceDescriptor> devDesc = mAvailableInputDevices[index];
                        if (!devDesc->isAttached()) {
                            devDesc->attach(hwModule);
                            devDesc->importAudioPort(inProfile, true);
                        }
                    }
                }
                inputDesc->close();
            } else {
                ALOGW("Cannot open input stream for device %08x on hw module %s",
                      profileType,
                      hwModule->getName());
            }
        }
    }
    // make sure all attached devices have been allocated a unique ID
    for (size_t i = 0; i  < mAvailableOutputDevices.size();) {
        if (!mAvailableOutputDevices[i]->isAttached()) {
            ALOGW("Output device %08x unreachable", mAvailableOutputDevices[i]->type());
            mAvailableOutputDevices.remove(mAvailableOutputDevices[i]);
            continue;
        }
        // The device is now validated and can be appended to the available devices of the engine
        mEngine->setDeviceConnectionState(mAvailableOutputDevices[i],
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
        i++;
    }
    for (size_t i = 0; i  < mAvailableInputDevices.size();) {
        if (!mAvailableInputDevices[i]->isAttached()) {
            ALOGW("Input device %08x unreachable", mAvailableInputDevices[i]->type());
            mAvailableInputDevices.remove(mAvailableInputDevices[i]);
            continue;
        }
        // The device is now validated and can be appended to the available devices of the engine
        mEngine->setDeviceConnectionState(mAvailableInputDevices[i],
                                          AUDIO_POLICY_DEVICE_STATE_AVAILABLE);
        i++;
    }
    // make sure default device is reachable
    if (mDefaultOutputDevice == 0 || mAvailableOutputDevices.indexOf(mDefaultOutputDevice) < 0) {
        ALOGE("Default device %08x is unreachable", mDefaultOutputDevice->type());
        status = NO_INIT;
    }
    // If microphones address is empty, set it according to device type
    for (size_t i = 0; i  < mAvailableInputDevices.size(); i++) {
        if (mAvailableInputDevices[i]->mAddress.isEmpty()) {
            if (mAvailableInputDevices[i]->type() == AUDIO_DEVICE_IN_BUILTIN_MIC) {
                mAvailableInputDevices[i]->mAddress = String8(AUDIO_BOTTOM_MICROPHONE_ADDRESS);
            } else if (mAvailableInputDevices[i]->type() == AUDIO_DEVICE_IN_BACK_MIC) {
                mAvailableInputDevices[i]->mAddress = String8(AUDIO_BACK_MICROPHONE_ADDRESS);
            }
        }
    }
​
    if (mPrimaryOutput == 0) {
        ALOGE("Failed to open primary output");
        status = NO_INIT;
    }
​
    // Update devices and outputs
    updateDevicesAndOutputs();
    return status;
}

This function performs seven tasks in total:

  1. Initialize the volume curve points for each audio stream type;
  2. Load the audio policy hardware abstraction library;
  3. Open the output devices;
  4. Save the output device descriptors;
  5. Set the output devices;
  6. Open the input devices;
  7. Update devices and outputs;
Initializing the volume curve points for each audio stream type
cpp
mVolumeCurves->initializeVolumeCurves(getConfig().isSpeakerDrcEnabled());

The initializeVolumeCurves() function is defined in StreamDescriptor.cpp:

cpp
void StreamDescriptorCollection::initializeVolumeCurves(bool isSpeakerDrcEnabled)
{
    for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
        for (int j = 0; j < DEVICE_CATEGORY_CNT; j++) {
            setVolumeCurvePoint(static_cast<audio_stream_type_t>(i),
                                static_cast<device_category>(j),
                                Gains::sVolumeProfiles[i][j]);
        }
    }
​
    // Check availability of DRC on speaker path: if available, override some of the speaker curves
    if (isSpeakerDrcEnabled) {
        ........................
    }
}

This function involves the concept of DRC (Dynamic Range Control). When the output audio signal is small, the system outputs it as configured, but when the signal becomes too large, DRC compresses its amplitude to keep it within a set range in order to protect the speaker. An overly large output signal causes clipping, which distorts the audio and can damage the speaker, so DRC is needed to bound the output. DRC does nothing while the signal is small; it only engages once the output power exceeds the configured DRC threshold.
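To make the DRC idea concrete, here is a minimal hard-knee limiter sketch. It is purely illustrative (the function name, threshold, and ratio are invented here); real DRC implementations work on signal power with attack/release smoothing rather than on isolated samples:

```cpp
#include <cassert>
#include <cmath>

// Minimal hard-knee compressor sketch: below the threshold the signal
// passes through unchanged; above it, the excess is reduced by `ratio`.
float applyDrc(float sample, float threshold, float ratio) {
    float magnitude = std::fabs(sample);
    if (magnitude <= threshold) {
        return sample;  // small signals are left untouched
    }
    float compressed = threshold + (magnitude - threshold) / ratio;
    return sample < 0 ? -compressed : compressed;
}
```

With threshold 0.5 and ratio 4, an input of 0.9 is reduced to roughly 0.6, while an input of 0.2 passes through unchanged.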

The function first iterates over all AUDIO_STREAM_CNT stream types and DEVICE_CATEGORY_CNT device categories, setting the corresponding volume curve points for every audio stream type;

It also checks whether DRC is enabled; if it is, some of the speaker volume curves are overridden accordingly;

cpp
void StreamDescriptorCollection::setVolumeCurvePoint(audio_stream_type_t stream,
                                                     device_category deviceCategory,
                                                     const VolumeCurvePoint *point)
{
    editValueAt(stream).setVolumeCurvePoint(deviceCategory, point);
}
cpp
void StreamDescriptor::setVolumeCurvePoint(device_category deviceCategory,
                                           const VolumeCurvePoint *point)
{
    mVolumeCurve[deviceCategory] = point;
}
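The loop structure above amounts to filling a small 2-D table indexed by (stream type, device category). The sketch below mimics that pattern with made-up stream/category counts and curve data; all names here are stand-ins, not the real Gains::sVolumeProfiles:

```cpp
#include <cassert>

// Hypothetical, simplified mirror of StreamDescriptorCollection:
// one curve pointer per (stream, device category) pair.
constexpr int kStreamCnt = 3;    // stand-in for AUDIO_STREAM_CNT
constexpr int kCategoryCnt = 2;  // stand-in for DEVICE_CATEGORY_CNT

struct CurvePoint { int knee; int dbAttenuation; };

// sVolumeProfiles analogue: a default curve per stream x category.
const CurvePoint kProfiles[kStreamCnt][kCategoryCnt] = {
    {{0, -50}, {0, -40}}, {{20, -30}, {20, -24}}, {{40, -10}, {40, -6}},
};

struct StreamCollection {
    const CurvePoint* curves[kStreamCnt][kCategoryCnt] = {};
    // Analogue of initializeVolumeCurves(): point every slot at its default.
    void initializeVolumeCurves() {
        for (int i = 0; i < kStreamCnt; i++)
            for (int j = 0; j < kCategoryCnt; j++)
                curves[i][j] = &kProfiles[i][j];  // setVolumeCurvePoint()
    }
};
```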
Loading the audio policy hardware abstraction library
cpp
audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;
for (const auto& hwModule : mHwModulesAll) {
    // Load the audio policy hardware abstraction library
    hwModule->setHandle(mpClientInterface->loadHwModule(hwModule->getName()));
    if (hwModule->getHandle() == AUDIO_MODULE_HANDLE_NONE) {
        ALOGW("could not open HW module %s", hwModule->getName());
        continue;
    }
    mHwModules.push_back(hwModule);

This code iterates over the mHwModulesAll variable. Three variables are involved here: mAvailableOutputDevices, mAvailableInputDevices, and mHwModulesAll. Let's pin down where they are defined and assigned, and what each of them means:

cpp
DeviceVector  mAvailableOutputDevices; // all available output devices
DeviceVector  mAvailableInputDevices;  // all available input devices
​
SessionRouteMap mOutputRoutes = SessionRouteMap(SessionRouteMap::MAPTYPE_OUTPUT);
SessionRouteMap mInputRoutes = SessionRouteMap(SessionRouteMap::MAPTYPE_INPUT);
​
bool    mLimitRingtoneVolume;        // limit ringtone volume to music volume if headset connected
audio_devices_t mDeviceForStrategy[NUM_STRATEGIES];
float   mLastVoiceVolume;            // last voice volume value sent to audio HAL
bool    mA2dpSuspended;  // true if A2DP output is suspended
​
std::unique_ptr<IVolumeCurvesCollection> mVolumeCurves; // Volume Curves per use case and device category
EffectDescriptorCollection mEffects;  // list of registered audio effects
sp<DeviceDescriptor> mDefaultOutputDevice; // output device selected by default at boot time
HwModuleCollection mHwModules; // contains only modules that have been loaded successfully
HwModuleCollection mHwModulesAll; // normally not needed, used during construction and for

mAvailableOutputDevices and mAvailableInputDevices represent all available output devices and all available input devices, respectively;

mHwModulesAll represents all of the hardware modules;

Let's look at how these three variables are passed along:

cpp
mConfig(mHwModulesAll, mAvailableOutputDevices, mAvailableInputDevices,
        mDefaultOutputDevice),

The AudioPolicyManager constructor executes the statement above, passing the three variables into mConfig. As we know, mConfig is of type AudioPolicyConfig, and in AudioPolicyConfig's constructor:

cpp
public:
    AudioPolicyConfig(HwModuleCollection &hwModules,
                      DeviceVector &availableOutputDevices,
                      DeviceVector &availableInputDevices,
                      sp<DeviceDescriptor> &defaultOutputDevices,
                      VolumeCurvesCollection *volumes = nullptr)
        : mHwModules(hwModules),
          mAvailableOutputDevices(availableOutputDevices),
          mAvailableInputDevices(availableInputDevices),
          mDefaultOutputDevices(defaultOutputDevices),
          mVolumeCurves(volumes),
          mIsSpeakerDrcEnabled(false)
    {}

The three objects are bound to the corresponding members of mConfig for safekeeping; note that the mHwModulesAll object passed in is bound to the member named mHwModules.

These three objects are populated when the audio policy config is loaded. Taking the setDefault() load path as an example, let's look at how they are filled in:

cpp
void setDefault(void)
{
    mDefaultOutputDevices = new DeviceDescriptor(AUDIO_DEVICE_OUT_SPEAKER);
    sp<HwModule> module;
    sp<DeviceDescriptor> defaultInputDevice = new DeviceDescriptor(AUDIO_DEVICE_IN_BUILTIN_MIC);
    mAvailableOutputDevices.add(mDefaultOutputDevices);
    mAvailableInputDevices.add(defaultInputDevice);
​
    module = new HwModule(AUDIO_HARDWARE_MODULE_ID_PRIMARY);
​
    ........................
    module->addOutputProfile(outProfile);
​
    ........................
    module->addInputProfile(inProfile);
​
    mHwModules.add(module);
}

This function populates the mAvailableOutputDevices and mAvailableInputDevices collections as well as mHwModules;

mAvailableOutputDevices

cpp
mDefaultOutputDevices = new DeviceDescriptor(AUDIO_DEVICE_OUT_SPEAKER);

A DeviceDescriptor is created via new DeviceDescriptor for the AUDIO_DEVICE_OUT_SPEAKER audio output device, with AUDIO_DEVICE_OUT_SPEAKER passed in as the type:

/frameworks/av/services/audiopolicy/common/managerdefinitions/src/DeviceDescriptor.cpp

cpp
DeviceDescriptor::DeviceDescriptor(audio_devices_t type, const String8 &tagName) :
    AudioPort(String8(""), AUDIO_PORT_TYPE_DEVICE,
              audio_is_output_device(type) ? AUDIO_PORT_ROLE_SINK :
                                             AUDIO_PORT_ROLE_SOURCE),
    mAddress(""), mTagName(tagName), mDeviceType(type), mId(0)
{
    if (type == AUDIO_DEVICE_IN_REMOTE_SUBMIX || type == AUDIO_DEVICE_OUT_REMOTE_SUBMIX ) {
        mAddress = String8("0");
    }
}

AUDIO_DEVICE_OUT_SPEAKER designates the speaker device type; there are other types as well, the Bluetooth output devices being a common example;

The newly created mDefaultOutputDevices object is then added to mAvailableOutputDevices;

mAvailableInputDevices

Likewise, a DeviceDescriptor object is created, this time with AUDIO_DEVICE_IN_BUILTIN_MIC as the parameter, representing the built-in mic input device;

The newly created defaultInputDevice object is then added to mAvailableInputDevices;

mHwModulesAll

cpp
module = new HwModule(AUDIO_HARDWARE_MODULE_ID_PRIMARY);
​
module->addOutputProfile(outProfile);
module->addInputProfile(inProfile);
​
mHwModules.add(module);

Based on the module type passed in, the corresponding hardware module is created; the output and input DeviceDescriptors created above are used to generate the matching OutputProfile and InputProfile, which are attached to the module to specify its outputs and inputs; finally, the configured module is added to mHwModulesAll;

With these three objects successfully created, let's return to the for loop that loads the audio policy hardware:

cpp
audio_devices_t outputDeviceTypes = mAvailableOutputDevices.types();
audio_devices_t inputDeviceTypes = mAvailableInputDevices.types() & ~AUDIO_DEVICE_BIT_IN;
for (const auto& hwModule : mHwModulesAll) {
    // Load the audio policy hardware abstraction library
    hwModule->setHandle(mpClientInterface->loadHwModule(hwModule->getName()));
    if (hwModule->getHandle() == AUDIO_MODULE_HANDLE_NONE) {
        ALOGW("could not open HW module %s", hwModule->getName());
        continue;
    }
    mHwModules.push_back(hwModule);

The types of the output and input devices are obtained from mAvailableOutputDevices and mAvailableInputDevices as audio_devices_t bitmasks;

Then mHwModulesAll is iterated to obtain each audio hardware module, and mpClientInterface->loadHwModule() is called to load the specified module. Let's see what happens inside loadHwModule();
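Since audio_devices_t is a bitmask, "is this profile usable with any attached device" reduces to a bitwise AND, as in the `(profileType & outputDeviceTypes) == 0` checks seen in initialize(). A small self-contained sketch (the device bit values below are invented for illustration):

```cpp
#include <cassert>
#include <cstdint>

// Invented device bits in the spirit of audio_devices_t.
constexpr uint32_t kOutSpeaker = 0x1;
constexpr uint32_t kOutWiredHeadset = 0x4;
constexpr uint32_t kOutBluetoothA2dp = 0x80;

// Mirrors the initialize() check: a profile is only opened when it
// supports at least one of the currently attached device types.
bool profileMatchesAttached(uint32_t profileTypes, uint32_t attachedTypes) {
    return (profileTypes & attachedTypes) != 0;
}
```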

mpClientInterface->loadHwModule()

The mpClientInterface object is of type AudioPolicyClient, which derives from AudioPolicyClientInterface and is implemented in AudioPolicyClientImpl.cpp, so let's look at the loadHwModule() implementation there:

cpp
audio_module_handle_t AudioPolicyService::AudioPolicyClient::loadHwModule(const char *name)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return AUDIO_MODULE_HANDLE_NONE;
    }
​
    return af->loadHwModule(name);
}

It calls AudioFlinger's loadHwModule() function, forwarding the name parameter;

Because AudioFlinger is created and initialized before AudioPolicyService, the AudioFlinger object can be obtained via AudioSystem::get_audio_flinger();

cpp
// loadHwModule_l() must be called with AudioFlinger::mLock held
audio_module_handle_t AudioFlinger::loadHwModule_l(const char *name)
{
    for (size_t i = 0; i < mAudioHwDevs.size(); i++) {
        if (strncmp(mAudioHwDevs.valueAt(i)->moduleName(), name, strlen(name)) == 0) {
            ALOGW("loadHwModule() module %s already loaded", name);
            return mAudioHwDevs.keyAt(i);
        }
    }
​
    sp<DeviceHalInterface> dev;
​
    int rc = mDevicesFactoryHal->openDevice(name, &dev);
    if (rc) {
        ALOGE("loadHwModule() error %d loading module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }
​
    mHardwareStatus = AUDIO_HW_INIT;
    rc = dev->initCheck();
    mHardwareStatus = AUDIO_HW_IDLE;
    if (rc) {
        ALOGE("loadHwModule() init check error %d for module %s", rc, name);
        return AUDIO_MODULE_HANDLE_NONE;
    }
​
    // Check and cache this HAL's level of support for master mute and master
    // volume.  If this is the first HAL opened, and it supports the get
    // methods, use the initial values provided by the HAL as the current
    // master mute and volume settings.
​
    AudioHwDevice::Flags flags = static_cast<AudioHwDevice::Flags>(0);
    {  // scope for auto-lock pattern
        AutoMutex lock(mHardwareLock);
​
        if (0 == mAudioHwDevs.size()) {
            mHardwareStatus = AUDIO_HW_GET_MASTER_VOLUME;
            float mv;
            if (OK == dev->getMasterVolume(&mv)) {
                mMasterVolume = mv;
            }
​
            mHardwareStatus = AUDIO_HW_GET_MASTER_MUTE;
            bool mm;
            if (OK == dev->getMasterMute(&mm)) {
                mMasterMute = mm;
            }
        }
​
        mHardwareStatus = AUDIO_HW_SET_MASTER_VOLUME;
        if (OK == dev->setMasterVolume(mMasterVolume)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_VOLUME);
        }
​
        mHardwareStatus = AUDIO_HW_SET_MASTER_MUTE;
        if (OK == dev->setMasterMute(mMasterMute)) {
            flags = static_cast<AudioHwDevice::Flags>(flags |
                    AudioHwDevice::AHWD_CAN_SET_MASTER_MUTE);
        }
​
        mHardwareStatus = AUDIO_HW_IDLE;
    }
​
    audio_module_handle_t handle = (audio_module_handle_t) nextUniqueId(AUDIO_UNIQUE_ID_USE_MODULE);
    mAudioHwDevs.add(handle, new AudioHwDevice(handle, name, dev, flags));
​
    ALOGI("loadHwModule() Loaded %s audio interface, handle %d", name, handle);
​
    return handle;
​
}

This function first screens the requested device name against the mAudioHwDevs collection, which holds all of the system's loaded audio device objects. mAudioHwDevs caches every AudioHwDevice that has already been loaded successfully; if an AudioHwDevice with the given name already exists there, it is not loaded again and the corresponding handle is returned directly;
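The "already loaded?" check is a linear scan over a handle→device map keyed by module name. A minimal sketch of the same caching pattern (types simplified and names hypothetical, not the real AudioFlinger members):

```cpp
#include <cassert>
#include <map>
#include <string>

// Simplified stand-in for AudioFlinger::mAudioHwDevs: handle -> module name.
struct ModuleCache {
    std::map<int, std::string> devs;
    int nextHandle = 1;

    // Returns the existing handle if the module was loaded before,
    // otherwise "loads" it and assigns a fresh unique handle.
    int loadHwModule(const std::string& name) {
        for (const auto& [handle, modName] : devs)
            if (modName == name) return handle;  // already loaded
        int handle = nextHandle++;
        devs.emplace(handle, name);
        return handle;
    }
};
```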

cpp
sp<DeviceHalInterface> dev;
​
int rc = mDevicesFactoryHal->openDevice(name, &dev);
if (rc) {
    ALOGE("loadHwModule() error %d loading module %s", rc, name);
    return AUDIO_MODULE_HANDLE_NONE;
}

If not, a DeviceHalInterface object is created; it is the native-layer proxy for the HAL hardware device, and the specified device is operated through it;

mDevicesFactoryHal->openDevice(name, &dev) opens the specified audio hardware device by name. mDevicesFactoryHal is the object we mentioned when discussing AudioFlinger: the DevicesFactoryHalHybrid created according to the hardware::audio version information, used to call into the HAL layer interfaces;

So let's look at the openDevice() function in DevicesFactoryHalHybrid:

cpp
DevicesFactoryHalHybrid::DevicesFactoryHalHybrid()
        : mLocalFactory(new DevicesFactoryHalLocal()),
          mHidlFactory(new DevicesFactoryHalHidl()) {
}
​
DevicesFactoryHalHybrid::~DevicesFactoryHalHybrid() {
}
​
status_t DevicesFactoryHalHybrid::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    if (mHidlFactory != 0 && strcmp(AUDIO_HARDWARE_MODULE_ID_A2DP, name) != 0 &&
        strcmp(AUDIO_HARDWARE_MODULE_ID_HEARING_AID, name) != 0) {
        return mHidlFactory->openDevice(name, device);
    }
    return mLocalFactory->openDevice(name, device);
}

As the code shows, only the A2DP and hearing-aid modules fall through to mLocalFactory, whose type is DevicesFactoryHalLocal; other modules go through mHidlFactory. The local implementation maps most directly onto the legacy audio HAL, so let's look at it;

cpp
static status_t load_audio_interface(const char *if_name, audio_hw_device_t **dev)
{
    const hw_module_t *mod;
    int rc;
​
    rc = hw_get_module_by_class(AUDIO_HARDWARE_MODULE_ID, if_name, &mod);
    if (rc) {
        ALOGE("%s couldn't load audio hw module %s.%s (%s)", __func__,
                AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
        goto out;
    }
    rc = audio_hw_device_open(mod, dev);
    if (rc) {
        ALOGE("%s couldn't open audio hw device in %s.%s (%s)", __func__,
                AUDIO_HARDWARE_MODULE_ID, if_name, strerror(-rc));
        goto out;
    }
    if ((*dev)->common.version < AUDIO_DEVICE_API_VERSION_MIN) {
        ALOGE("%s wrong audio hw device version %04x", __func__, (*dev)->common.version);
        rc = BAD_VALUE;
        audio_hw_device_close(*dev);
        goto out;
    }
    return OK;
​
out:
    *dev = NULL;
    return rc;
}
​
status_t DevicesFactoryHalLocal::openDevice(const char *name, sp<DeviceHalInterface> *device) {
    audio_hw_device_t *dev;
    status_t rc = load_audio_interface(name, &dev);
    if (rc == OK) {
        *device = new DeviceHalLocal(dev);
    }
    return rc;
}

The load_audio_interface() function consists of three parts:

  1. Load the specified audio hardware abstraction module;
  2. Open the module loaded in the previous step as an audio device;
  3. Check that the device's version satisfies the AUDIO_DEVICE_API_VERSION_MIN requirement;
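The steps above can be condensed into a hedged sketch: open a device, then reject it if its reported version is below the minimum. The struct and constant below are simplified stand-ins, not the real libhardware types:

```cpp
#include <cassert>

constexpr int kMinVersion = 0x0200;  // stand-in for AUDIO_DEVICE_API_VERSION_MIN

struct FakeDevice { int version; };

// Mirrors the shape of load_audio_interface(): "open" the device, then
// validate its version; on failure the out-parameter is cleared.
bool openAudioDevice(int moduleVersion, FakeDevice** out) {
    static FakeDevice dev{};
    dev.version = moduleVersion;       // step 2: open the device
    if (dev.version < kMinVersion) {   // step 3: version gate
        *out = nullptr;
        return false;
    }
    *out = &dev;
    return true;
}
```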

On success, it returns an audio_hw_device_t object, from which the corresponding DeviceHalLocal object is created; this is the piece that maps onto the Android HAL;

At this point, the logic for loading the audio policy hardware abstraction library is complete.

Opening the output devices

Once the module specified by name has been loaded and opened, the next step is loading and opening the outputDevice and inputDevice within that module;

cpp
for (const auto& outProfile : hwModule->getOutputProfiles()) {
    ..............................
    
    audio_devices_t profileType = outProfile->getSupportedDevicesType();
    if ((profileType & mDefaultOutputDevice->type()) != AUDIO_DEVICE_NONE) {
        profileType = mDefaultOutputDevice->type();
    } else {
        // chose first device present in profile's SupportedDevices also part of
        // outputDeviceTypes
        profileType = outProfile->getSupportedDeviceForType(outputDeviceTypes);
    }
    if ((profileType & outputDeviceTypes) == 0) {
        continue;
    }
    sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
                                                                         mpClientInterface);
    const DeviceVector &supportedDevices = outProfile->getSupportedDevices();
    const DeviceVector &devicesForType = supportedDevices.getDevicesFromType(profileType);
    String8 address = devicesForType.size() > 0 ? devicesForType.itemAt(0)->mAddress
        : String8("");
    audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
    // Open the output device
    status_t status = outputDesc->open(nullptr, profileType, address,
                                       AUDIO_STREAM_DEFAULT, AUDIO_OUTPUT_FLAG_NONE, &output);
​
    ........................
}

First the outputProfiles of the hwModule are obtained. Let's first clarify what an OutputProfile is;

OutputProfile is defined in IOProfile.h, alongside a matching InputProfile; both inherit from IOProfile. IOProfile mainly stores configuration relating output and input devices: for example, which module an output/input device belongs to, the set of usable output/input devices in that module, and whether the module supports any device (i.e. whether that set is non-empty), along with related device and state information;

After obtaining all of the SupportedDevices, the outProfile object is first validated (canOpenNewIo() and so on); once validated, the corresponding profileType is obtained and a SwAudioOutputDescriptor object is created. Let's look at the SwAudioOutputDescriptor constructor:

cpp
class SwAudioOutputDescriptor: public AudioOutputDescriptor
{
public:
    SwAudioOutputDescriptor(const sp<IOProfile>& profile,
                            AudioPolicyClientInterface *clientInterface);

SwAudioOutputDescriptor inherits from AudioOutputDescriptor; let's look at AudioOutputDescriptor's constructor:

cpp
AudioOutputDescriptor::AudioOutputDescriptor(const sp<AudioPort>& port,
                                             AudioPolicyClientInterface *clientInterface)
    : mPort(port), mDevice(AUDIO_DEVICE_NONE),
      mClientInterface(clientInterface), mPatchHandle(AUDIO_PATCH_HANDLE_NONE), mId(0)
{
    // clear usage count for all stream types
    for (int i = 0; i < AUDIO_STREAM_CNT; i++) {
        mRefCount[i] = 0;
        mCurVolume[i] = -1.0;
        mMuteCount[i] = 0;
        mStopTime[i] = 0;
    }
    for (int i = 0; i < NUM_STRATEGIES; i++) {
        mStrategyMutedByDevice[i] = false;
    }
    if (mPort.get() != nullptr) {
        mPort->pickAudioProfile(mSamplingRate, mChannelMask, mFormat);
        if (mPort->mGains.size() > 0) {
            mPort->mGains[0]->getDefaultConfig(&mGain);
        }
    }
}

Once the AudioOutputDescriptor object is created, the output DeviceDescriptor stored in the outputProfile is fetched, its address obtained, and finally outputDesc->open() is called:

cpp
status_t SwAudioOutputDescriptor::open(const audio_config_t *config,
                                       audio_devices_t device,
                                       const String8& address,
                                       audio_stream_type_t stream,
                                       audio_output_flags_t flags,
                                       audio_io_handle_t *output)
{
    audio_config_t lConfig;
    ..............................
​
    mFlags = (audio_output_flags_t)(mFlags | flags);
​
    ALOGV("opening output for device %08x address %s profile %p name %s",
          mDevice, address.string(), mProfile.get(), mProfile->getName().string());
​
    status_t status = mClientInterface->openOutput(mProfile->getModuleHandle(),
                                                   output,
                                                   &lConfig,
                                                   &mDevice,
                                                   address,
                                                   &mLatency,
                                                   mFlags);
    ........................
​
    return status;
}

This in turn calls mClientInterface's openOutput() function:

cpp
status_t AudioPolicyService::AudioPolicyClient::openOutput(audio_module_handle_t module,
                                                           audio_io_handle_t *output,
                                                           audio_config_t *config,
                                                           audio_devices_t *devices,
                                                           const String8& address,
                                                           uint32_t *latencyMs,
                                                           audio_output_flags_t flags)
{
    sp<IAudioFlinger> af = AudioSystem::get_audio_flinger();
    if (af == 0) {
        ALOGW("%s: could not get AudioFlinger", __func__);
        return PERMISSION_DENIED;
    }
    return af->openOutput(module, output, config, devices, address, latencyMs, flags);
}
cpp
status_t AudioFlinger::openOutput(audio_module_handle_t module,
                                  audio_io_handle_t *output,
                                  audio_config_t *config,
                                  audio_devices_t *devices,
                                  const String8& address,
                                  uint32_t *latencyMs,
                                  audio_output_flags_t flags)
{
    ALOGI("openOutput() this %p, module %d Device %#x, SamplingRate %d, Format %#08x, "
              "Channels %#x, flags %#x",
              this, module,
              (devices != NULL) ? *devices : 0,
              config->sample_rate,
              config->format,
              config->channel_mask,
              flags);
​
    if (devices == NULL || *devices == AUDIO_DEVICE_NONE) {
        return BAD_VALUE;
    }
​
    Mutex::Autolock _l(mLock);
​
    sp<ThreadBase> thread = openOutput_l(module, output, config, *devices, address, flags);
    ..............................
​
    return NO_INIT;
}
cpp
sp<AudioFlinger::ThreadBase> AudioFlinger::openOutput_l(audio_module_handle_t module,
                                                            audio_io_handle_t *output,
                                                            audio_config_t *config,
                                                            audio_devices_t devices,
                                                            const String8& address,
                                                            audio_output_flags_t flags)
{
    // The AudioHwDevice obtained in findSuitableHwDev_l() is the module device we loaded earlier during audio policy loading
    AudioHwDevice *outHwDev = findSuitableHwDev_l(module, devices);
    if (outHwDev == NULL) {
        return 0;
    }
​
    if (*output == AUDIO_IO_HANDLE_NONE) {
        *output = nextUniqueId(AUDIO_UNIQUE_ID_USE_OUTPUT);
    } else {
        // Audio Policy does not currently request a specific output handle.
        // If this is ever needed, see openInput_l() for example code.
        ALOGE("openOutput_l requested output handle %d is not AUDIO_IO_HANDLE_NONE", *output);
        return 0;
    }
​
    mHardwareStatus = AUDIO_HW_OUTPUT_OPEN;
​
    // FOR TESTING ONLY:
    // This if statement allows overriding the audio policy settings
    // and forcing a specific format or channel mask to the HAL/Sink device for testing.
    if (!(flags & (AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD | AUDIO_OUTPUT_FLAG_DIRECT))) {
        ........................
    }
​
    AudioStreamOut *outputStream = NULL;
    status_t status = outHwDev->openOutputStream(
            &outputStream,
            *output,
            devices,
            flags,
            config,
            address.string());
​
    mHardwareStatus = AUDIO_HW_IDLE;
​
    if (status == NO_ERROR) {
        if (flags & AUDIO_OUTPUT_FLAG_MMAP_NOIRQ) {
            sp<MmapPlaybackThread> thread =
                    new MmapPlaybackThread(this, *output, outHwDev, outputStream,
                                          devices, AUDIO_DEVICE_NONE, mSystemReady);
            mMmapThreads.add(*output, thread);
            ALOGV("openOutput_l() created mmap playback thread: ID %d thread %p",
                  *output, thread.get());
            return thread;
        } else {
            sp<PlaybackThread> thread;
            if (flags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) {
                thread = new OffloadThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created offload output: ID %d thread %p",
                      *output, thread.get());
            } else if ((flags & AUDIO_OUTPUT_FLAG_DIRECT)
                    || !isValidPcmSinkFormat(config->format)
                    || !isValidPcmSinkChannelMask(config->channel_mask)) {
                thread = new DirectOutputThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created direct output: ID %d thread %p",
                      *output, thread.get());
            } else {
                thread = new MixerThread(this, outputStream, *output, devices, mSystemReady);
                ALOGV("openOutput_l() created mixer output: ID %d thread %p",
                      *output, thread.get());
            }
            mPlaybackThreads.add(*output, thread);
            return thread;
        }
    }
​
    return 0;
}

First, the previously stored AudioHwDevice object is retrieved; AudioHwDevice::openOutputStream() is then called to create an AudioStreamOut object, which corresponds to audio_stream_out at the HAL layer, and a PlaybackThread is created for playback.

/frameworks/av/services/audioflinger/AudioHwDevice.cpp

cpp
status_t AudioHwDevice::openOutputStream(
        AudioStreamOut **ppStreamOut,
        audio_io_handle_t handle,
        audio_devices_t devices,
        audio_output_flags_t flags,
        struct audio_config *config,
        const char *address)
{
​
    struct audio_config originalConfig = *config;
    AudioStreamOut *outputStream = new AudioStreamOut(this, flags);
​
    // Try to open the HAL first using the current format.
    ALOGV("openOutputStream(), try "
            " sampleRate %d, Format %#x, "
            "channelMask %#x",
            config->sample_rate,
            config->format,
            config->channel_mask);
    status_t status = outputStream->open(handle, devices, config, address);
​
    if (status != NO_ERROR) {
        ..............................
    }
​
    *ppStreamOut = outputStream;
    return status;
}

An AudioStreamOut object is created here. The class itself is nothing special: it simply ties the audio_hw_device_t and audio_stream_out_t handles together so that the output stream can be driven through a single object.

/frameworks/av/services/audioflinger/AudioStreamOut.cpp

cpp
status_t AudioStreamOut::open(
        audio_io_handle_t handle,
        audio_devices_t devices,
        struct audio_config *config,
        const char *address)
{
    sp<StreamOutHalInterface> outStream;
​
    audio_output_flags_t customFlags = (config->format == AUDIO_FORMAT_IEC61937)
                ? (audio_output_flags_t)(flags | AUDIO_OUTPUT_FLAG_IEC958_NONAUDIO)
                : flags;
​
    int status = hwDev()->openOutputStream(
            handle,
            devices,
            customFlags,
            config,
            address,
            &outStream);
    ALOGV("AudioStreamOut::open(), HAL returned "
            " stream %p, sampleRate %d, Format %#x, "
            "channelMask %#x, status %d",
            outStream.get(),
            config->sample_rate,
            config->format,
            config->channel_mask,
            status);
​
    // Some HALs may not recognize AUDIO_FORMAT_IEC61937. But if we declare
    // it as PCM then it will probably work.
    if (status != NO_ERROR && config->format == AUDIO_FORMAT_IEC61937) {
        struct audio_config customConfig = *config;
        customConfig.format = AUDIO_FORMAT_PCM_16_BIT;
​
        status = hwDev()->openOutputStream(
                handle,
                devices,
                customFlags,
                &customConfig,
                address,
                &outStream);
        ALOGV("AudioStreamOut::open(), treat IEC61937 as PCM, status = %d", status);
    }
​
    if (status == NO_ERROR) {
        stream = outStream;
        mHalFormatHasProportionalFrames = audio_has_proportional_frames(config->format);
        status = stream->getFrameSize(&mHalFrameSize);
    }
​
    return status;
}

This function mainly opens the corresponding output stream at the HAL layer. Note the fallback: if the HAL rejects AUDIO_FORMAT_IEC61937, the open is retried with the config rewritten to AUDIO_FORMAT_PCM_16_BIT.
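The retry logic can be sketched as a small self-contained pattern. The types and the helper name openWithPcmFallback below are simplified stand-ins for illustration, not the real audio.h definitions:

```cpp
#include <cassert>
#include <functional>

// Simplified stand-ins for the real audio types.
enum Format { FORMAT_PCM_16_BIT, FORMAT_IEC61937 };
struct Config { Format format; };
constexpr int STATUS_OK = 0;

// Try the requested format first; if the HAL rejects IEC61937, retry the open
// with a copy of the config rewritten to 16-bit PCM, mirroring what
// AudioStreamOut::open() does ("declare it as PCM and it will probably work").
inline int openWithPcmFallback(const std::function<int(Config&)>& halOpen,
                               Config& config) {
    int status = halOpen(config);
    if (status != STATUS_OK && config.format == FORMAT_IEC61937) {
        Config pcmConfig = config;            // keep the caller's config intact
        pcmConfig.format = FORMAT_PCM_16_BIT;
        status = halOpen(pcmConfig);
    }
    return status;
}
```

Note that the fallback works on a copy of the config, so the caller's requested format is preserved even when the PCM retry is what actually succeeds.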

Saving the output device descriptor

After the AudioHwDevice is opened, that is, once its AudioStreamOut has been opened, the next step is to save the descriptor of the output device:

cpp
sp<SwAudioOutputDescriptor> outputDesc = new SwAudioOutputDescriptor(outProfile,
                                                                                 mpClientInterface);
........................
audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
// Open the output device
status_t status = outputDesc->open(nullptr, profileType, address,
                                   AUDIO_STREAM_DEFAULT, AUDIO_OUTPUT_FLAG_NONE, &output);
​
........................
​
    // Save the output device descriptor
    addOutput(output, outputDesc);

When the output device is opened, an output handle of type audio_io_handle_t and an outputDesc of type SwAudioOutputDescriptor are created for it.

addOutput() is then called to save the descriptor of the output device:

cpp
void AudioPolicyManager::addOutput(audio_io_handle_t output,
                                   const sp<SwAudioOutputDescriptor>& outputDesc)
{
    mOutputs.add(output, outputDesc);
    applyStreamVolumes(outputDesc, AUDIO_DEVICE_NONE, 0 /* delayMs */, true /* force */);
    updateMono(output); // update mono status when adding to output list
    selectOutputForMusicEffects();
    nextAudioPortGeneration();
}

First, output and its outputDesc are stored as a key/value pair in mOutputs, whose type is SwAudioOutputCollection.

applyStreamVolumes() is then called:

cpp
void AudioPolicyManager::applyStreamVolumes(const sp<AudioOutputDescriptor>& outputDesc,
                                                audio_devices_t device,
                                                int delayMs,
                                                bool force)
{
    ALOGVV("applyStreamVolumes() for device %08x", device);
​
    for (int stream = 0; stream < AUDIO_STREAM_FOR_POLICY_CNT; stream++) {
        checkAndSetVolume((audio_stream_type_t)stream,
                          mVolumeCurves->getVolumeIndex((audio_stream_type_t)stream, device),
                          outputDesc,
                          device,
                          delayMs,
                          force);
    }
}

This iterates over all stream types and applies the volume setting to each stream via checkAndSetVolume():

cpp
status_t AudioPolicyManager::checkAndSetVolume(audio_stream_type_t stream,
                                                   int index,
                                                   const sp<AudioOutputDescriptor>& outputDesc,
                                                   audio_devices_t device,
                                                   int delayMs,
                                                   bool force)
{
    // do not change actual stream volume if the stream is muted
    if (outputDesc->mMuteCount[stream] != 0) {
        ALOGVV("checkAndSetVolume() stream %d muted count %d",
              stream, outputDesc->mMuteCount[stream]);
        return NO_ERROR;
    }
    audio_policy_forced_cfg_t forceUseForComm =
            mEngine->getForceUse(AUDIO_POLICY_FORCE_FOR_COMMUNICATION);
    // do not change in call volume if bluetooth is connected and vice versa
    if ((stream == AUDIO_STREAM_VOICE_CALL && forceUseForComm == AUDIO_POLICY_FORCE_BT_SCO) ||
        (stream == AUDIO_STREAM_BLUETOOTH_SCO && forceUseForComm != AUDIO_POLICY_FORCE_BT_SCO)) {
        ALOGV("checkAndSetVolume() cannot set stream %d volume with force use = %d for comm",
             stream, forceUseForComm);
        return INVALID_OPERATION;
    }
​
    // Re-assign device if none was requested
    if (device == AUDIO_DEVICE_NONE) {
        device = outputDesc->device();
    }
​
    // Compute the volume to apply, in dB
    // This is done by volIndexToDb(): the index (the level set in the UI) is converted to dB according to the stream's volume curve
    float volumeDb = computeVolume(stream, index, device);
    if (outputDesc->isFixedVolume(device) ||
            // Force VoIP volume to max for bluetooth SCO
            ((stream == AUDIO_STREAM_VOICE_CALL || stream == AUDIO_STREAM_BLUETOOTH_SCO) &&
             (device & AUDIO_DEVICE_OUT_ALL_SCO) != 0)) {
        volumeDb = 0.0f;
    }
​
    // Hand the volume to AudioFlinger
    // AudioFlinger executes volume changes while AudioPolicy decides them; setVolume() for SwAudioOutputDescriptor is implemented in AudioOutputDescriptor.cpp
    outputDesc->setVolume(volumeDb, stream, device, delayMs, force);
​
    if (stream == AUDIO_STREAM_VOICE_CALL ||
        stream == AUDIO_STREAM_BLUETOOTH_SCO) {
        float voiceVolume;
        // Force voice volume to max for bluetooth SCO as volume is managed by the headset
        if (stream == AUDIO_STREAM_VOICE_CALL) {
            // Compute the voice stream volume from the index
            voiceVolume = (float)index/(float)mVolumeCurves->getVolumeIndexMax(stream);
        } else {
            // For AUDIO_STREAM_BLUETOOTH_SCO the Bluetooth side adjusts the volume itself, so the maximum value is used here
            voiceVolume = 1.0;
        }
​
        if (voiceVolume != mLastVoiceVolume) {
            // Call AudioFlinger's setVoiceVolume() interface directly
            mpClientInterface->setVoiceVolume(voiceVolume, delayMs);
            mLastVoiceVolume = voiceVolume;
        }
    }
​
    return NO_ERROR;
}
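The index-to-dB conversion mentioned in the comments can be illustrated with a piecewise-linear curve. This is a minimal sketch assuming a curve of (index, dB) points over a normalized 0-100 range; the curve values and helper names are illustrative, not the platform's actual tables:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// One point of a stream's volume curve: a normalized index (0..100) mapped to
// an attenuation in dB (0 dB = full volume).
struct CurvePoint { int index; float db; };

// Hypothetical counterpart of volIndexToDb(): normalize the UI index into the
// curve's 0..100 range, then interpolate linearly between bracketing points.
inline float volIndexToDb(const std::vector<CurvePoint>& curve,
                          int indexMin, int indexMax, int index) {
    float norm = 100.0f * (index - indexMin) / (float)(indexMax - indexMin);
    if (norm <= curve.front().index) return curve.front().db;
    if (norm >= curve.back().index)  return curve.back().db;
    for (size_t i = 1; i < curve.size(); i++) {
        if (norm <= curve[i].index) {
            float span = (float)(curve[i].index - curve[i - 1].index);
            float frac = (norm - curve[i - 1].index) / span;
            return curve[i - 1].db + frac * (curve[i].db - curve[i - 1].db);
        }
    }
    return curve.back().db;
}

// dB attenuation -> linear gain, the form ultimately applied by the mixer.
inline float dbToGain(float db) { return std::pow(10.0f, db / 20.0f); }
```

With a curve such as {(0, -58), (20, -30), (60, -15), (100, 0)} and a 0-15 UI range, index 15 maps to 0 dB (gain 1.0) and index 3 maps to -30 dB.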

After applyStreamVolumes() completes, updateMono(output) follows:

cpp
void updateMono(audio_io_handle_t output) {
    AudioParameter param;
    param.addInt(String8(AudioParameter::keyMonoOutput), (int)mMasterMono);
    mpClientInterface->setParameters(output, param.toString());
}

This function builds an AudioParameter for the output. mMasterMono is a member defined in AudioPolicyManager.h and can be changed through AudioPolicyManager::setMasterMono(); its value is stored under the AudioParameter::keyMonoOutput key, and mpClientInterface->setParameters() then passes the output handle and the parameter string down to the HAL.
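The parameters travel to the HAL as a "key=value" string. A minimal sketch of that convention follows; the class is a simplified stand-in (not the real AudioParameter), and "mono_output" is assumed to be the string behind AudioParameter::keyMonoOutput:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Simplified stand-in for AudioParameter: collect key/value pairs and join
// them into the "key1=value1;key2=value2" form handed to setParameters().
class Params {
public:
    void addInt(const std::string& key, int value) {
        mPairs[key] = std::to_string(value);
    }
    std::string toString() const {
        std::ostringstream out;
        bool first = true;
        for (const auto& [key, value] : mPairs) {
            if (!first) out << ';';
            out << key << '=' << value;
            first = false;
        }
        return out.str();
    }
private:
    std::map<std::string, std::string> mPairs;  // sorted, deterministic order
};
```

For updateMono(), the resulting string would look like "mono_output=1", which the HAL parses back into key/value form.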

After updateMono() comes selectOutputForMusicEffects():

cpp
audio_io_handle_t AudioPolicyManager::selectOutputForMusicEffects()
{
    // select one output among several suitable for global effects.
    // The priority is as follows:
    // 1: An offloaded output. If the effect ends up not being offloadable,
    //    AudioFlinger will invalidate the track and the offloaded output
    //    will be closed causing the effect to be moved to a PCM output.
    // 2: A deep buffer output
    // 3: The primary output
    // 4: the first output in the list
​
    routing_strategy strategy = getStrategy(AUDIO_STREAM_MUSIC);
    audio_devices_t device = getDeviceForStrategy(strategy, false /*fromCache*/);
    SortedVector<audio_io_handle_t> outputs = getOutputsForDevice(device, mOutputs);
​
    if (outputs.size() == 0) {
        return AUDIO_IO_HANDLE_NONE;
    }
​
    audio_io_handle_t output = AUDIO_IO_HANDLE_NONE;
    bool activeOnly = true;
​
    while (output == AUDIO_IO_HANDLE_NONE) {
        audio_io_handle_t outputOffloaded = AUDIO_IO_HANDLE_NONE;
        audio_io_handle_t outputDeepBuffer = AUDIO_IO_HANDLE_NONE;
        audio_io_handle_t outputPrimary = AUDIO_IO_HANDLE_NONE;
​
        for (audio_io_handle_t output : outputs) {
            sp<SwAudioOutputDescriptor> desc = mOutputs.valueFor(output);
            if (activeOnly && !desc->isStreamActive(AUDIO_STREAM_MUSIC)) {
                continue;
            }
            ALOGV("selectOutputForMusicEffects activeOnly %d output %d flags 0x%08x",
                  activeOnly, output, desc->mFlags);
            if ((desc->mFlags & AUDIO_OUTPUT_FLAG_COMPRESS_OFFLOAD) != 0) {
                outputOffloaded = output;
            }
            if ((desc->mFlags & AUDIO_OUTPUT_FLAG_DEEP_BUFFER) != 0) {
                outputDeepBuffer = output;
            }
            if ((desc->mFlags & AUDIO_OUTPUT_FLAG_PRIMARY) != 0) {
                outputPrimary = output;
            }
        }
        // This embodies the output selection rule (priority order among outputs)
        if (outputOffloaded != AUDIO_IO_HANDLE_NONE) {
            output = outputOffloaded;
        } else if (outputDeepBuffer != AUDIO_IO_HANDLE_NONE) {
            output = outputDeepBuffer;
        } else if (outputPrimary != AUDIO_IO_HANDLE_NONE) {
            output = outputPrimary;
        } else {
            output = outputs[0];
        }
        activeOnly = false;
    }
​
    if (output != mMusicEffectOutput) {
        mpClientInterface->moveEffects(AUDIO_SESSION_OUTPUT_MIX, mMusicEffectOutput, output);
        mMusicEffectOutput = output;
    }
​
    ALOGV("selectOutputForMusicEffects selected output %d", output);
    return output;
}

This selects, among the outputs suitable for global music effects (the list built by getOutputsForDevice()), a single output. As the comments in the code state, the priority is:

  1. An offloaded output;
  2. A deep-buffer output;
  3. The primary output;
  4. The first output in the list.

Outputs on which AUDIO_STREAM_MUSIC is active are scanned first; if none qualifies, the scan is repeated over all outputs. If the selected output differs from the previous one, mpClientInterface->moveEffects() migrates the existing effects to it, and the selected output is returned.
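The two-pass priority scan can be sketched with simplified types; the flag bits and descriptor below are stand-ins, and -1 plays the role of AUDIO_IO_HANDLE_NONE:

```cpp
#include <cassert>
#include <cstdint>
#include <map>

enum : uint32_t {
    FLAG_OFFLOAD     = 1u << 0,
    FLAG_DEEP_BUFFER = 1u << 1,
    FLAG_PRIMARY     = 1u << 2,
};
struct OutputDesc { uint32_t flags; bool musicActive; };

// Prefer offload > deep buffer > primary > first, scanning music-active
// outputs first and falling back to all outputs when none is active.
inline int selectOutputForMusicEffects(const std::map<int, OutputDesc>& outputs) {
    if (outputs.empty()) return -1;
    for (bool activeOnly : {true, false}) {
        int offloaded = -1, deepBuffer = -1, primary = -1, first = -1;
        for (const auto& [handle, desc] : outputs) {
            if (activeOnly && !desc.musicActive) continue;
            if (first == -1) first = handle;
            if (desc.flags & FLAG_OFFLOAD)     offloaded  = handle;
            if (desc.flags & FLAG_DEEP_BUFFER) deepBuffer = handle;
            if (desc.flags & FLAG_PRIMARY)     primary    = handle;
        }
        if (offloaded  != -1) return offloaded;
        if (deepBuffer != -1) return deepBuffer;
        if (primary    != -1) return primary;
        if (first      != -1) return first;
        // nothing matched the active-only pass: loop again over all outputs
    }
    return -1;
}
```

One simplification: when no flagged output exists, this returns the first output seen in the current pass, whereas the real code falls back to outputs[0] of the full list.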

After selectOutputForMusicEffects(), the last step of the descriptor-saving stage is nextAudioPortGeneration():

cpp
uint32_t AudioPolicyManager::nextAudioPortGeneration()
{
    return mAudioPortGeneration++;
}

Despite the name, this does not count ports or streams: it is a generation counter, incremented on every change to the audio port/patch configuration so that a cached port list can be recognized as stale.
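A sketch of the generation-counter pattern (an illustrative class, not the real implementation): any change to the port/patch configuration bumps the counter, and a client whose cached generation no longer matches knows its snapshot is stale.

```cpp
#include <cassert>
#include <atomic>
#include <cstdint>

class PortGeneration {
public:
    // Called on every port/patch change; returns the pre-increment value,
    // like nextAudioPortGeneration()'s "return mAudioPortGeneration++".
    uint32_t next() { return mGeneration.fetch_add(1); }
    uint32_t current() const { return mGeneration.load(); }
private:
    std::atomic<uint32_t> mGeneration{1};
};

// A client compares the generation it cached against the current one.
inline bool snapshotStale(const PortGeneration& gen, uint32_t cachedGeneration) {
    return gen.current() != cachedGeneration;
}
```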

Setting the output device

Once the output device descriptor has been saved, the output device is set:

cpp
// Save the output device descriptor
addOutput(output, outputDesc);
// Set the output device
setOutputDevice(outputDesc,
                profileType,
                true,
                0,
                NULL,
                address);

setOutputDevice() is called with the outputDesc that was just created:

cpp
uint32_t AudioPolicyManager::setOutputDevice(const sp<AudioOutputDescriptor>& outputDesc,
                                             audio_devices_t device,
                                             bool force,
                                             int delayMs,
                                             audio_patch_handle_t *patchHandle,
                                             const char *address,
                                             bool requiresMuteCheck)
{
    ALOGV("setOutputDevice() device %04x delayMs %d", device, delayMs);
    AudioParameter param;
    uint32_t muteWaitMs;
​
    if (outputDesc->isDuplicated()) {
        muteWaitMs = setOutputDevice(outputDesc->subOutput1(), device, force, delayMs,
                nullptr /* patchHandle */, nullptr /* address */, requiresMuteCheck);
        muteWaitMs += setOutputDevice(outputDesc->subOutput2(), device, force, delayMs,
                nullptr /* patchHandle */, nullptr /* address */, requiresMuteCheck);
        return muteWaitMs;
    }
    // no need to proceed if new device is not AUDIO_DEVICE_NONE and not supported by current
    // output profile
    if ((device != AUDIO_DEVICE_NONE) &&
            ((device & outputDesc->supportedDevices()) == AUDIO_DEVICE_NONE)) {
        return 0;
    }
​
    // filter devices according to output selected
    device = (audio_devices_t)(device & outputDesc->supportedDevices());
​
    audio_devices_t prevDevice = outputDesc->mDevice;
​
    ALOGV("setOutputDevice() prevDevice 0x%04x", prevDevice);
​
    // This changes the default output device of the audio path
    if (device != AUDIO_DEVICE_NONE) {
        outputDesc->mDevice = device;
    }
​
    // if the outputs are not materially active, there is no need to mute.
    if (requiresMuteCheck) {
        muteWaitMs = checkDeviceMuteStrategies(outputDesc, prevDevice, delayMs);
    } else {
        ALOGV("%s: suppressing checkDeviceMuteStrategies", __func__);
        muteWaitMs = 0;
    }
​
    // Do not change the routing if:
    //      the requested device is AUDIO_DEVICE_NONE
    //      OR the requested device is the same as current device
    //  AND force is not specified
    //  AND the output is connected by a valid audio patch.
    // Doing this check here allows the caller to call setOutputDevice() without conditions
    if ((device == AUDIO_DEVICE_NONE || device == prevDevice) &&
        !force &&
        outputDesc->getPatchHandle() != 0) {
        ALOGV("setOutputDevice() setting same device 0x%04x or null device", device);
        return muteWaitMs;
    }
​
    ALOGV("setOutputDevice() changing device");
​
    // do the routing
    if (device == AUDIO_DEVICE_NONE) {
        resetOutputDevice(outputDesc, delayMs, NULL);
    } else {
        DeviceVector deviceList;
        if ((address == NULL) || (strlen(address) == 0)) {
            deviceList = mAvailableOutputDevices.getDevicesFromType(device);
        } else {
            deviceList = mAvailableOutputDevices.getDevicesFromTypeAddr(device, String8(address));
        }
​
        if (!deviceList.isEmpty()) {
            struct audio_patch patch;
            outputDesc->toAudioPortConfig(&patch.sources[0]);
            patch.num_sources = 1;
            patch.num_sinks = 0;
            for (size_t i = 0; i < deviceList.size() && i < AUDIO_PATCH_PORTS_MAX; i++) {
                deviceList.itemAt(i)->toAudioPortConfig(&patch.sinks[i]);
                patch.num_sinks++;
            }
            ssize_t index;
            if (patchHandle && *patchHandle != AUDIO_PATCH_HANDLE_NONE) {
                index = mAudioPatches.indexOfKey(*patchHandle);
            } else {
                index = mAudioPatches.indexOfKey(outputDesc->getPatchHandle());
            }
            sp< AudioPatch> patchDesc;
            audio_patch_handle_t afPatchHandle = AUDIO_PATCH_HANDLE_NONE;
            if (index >= 0) {
                patchDesc = mAudioPatches.valueAt(index);
                afPatchHandle = patchDesc->mAfPatchHandle;
            }
​
            status_t status = mpClientInterface->createAudioPatch(&patch,
                                                                   &afPatchHandle,
                                                                   delayMs);
            ALOGV("setOutputDevice() createAudioPatch returned %d patchHandle %d"
                    "num_sources %d num_sinks %d",
                                       status, afPatchHandle, patch.num_sources, patch.num_sinks);
            if (status == NO_ERROR) {
                if (index < 0) {
                    patchDesc = new AudioPatch(&patch, mUidCached);
                    addAudioPatch(patchDesc->mHandle, patchDesc);
                } else {
                    patchDesc->mPatch = patch;
                }
                patchDesc->mAfPatchHandle = afPatchHandle;
                if (patchHandle) {
                    *patchHandle = patchDesc->mHandle;
                }
                outputDesc->setPatchHandle(patchDesc->mHandle);
                nextAudioPortGeneration();
                mpClientInterface->onAudioPatchListUpdate();
            }
        }
​
        // inform all input as well
        for (size_t i = 0; i < mInputs.size(); i++) {
            const sp<AudioInputDescriptor>  inputDescriptor = mInputs.valueAt(i);
            if (!is_virtual_input_device(inputDescriptor->mDevice)) {
                AudioParameter inputCmd = AudioParameter();
                ALOGV("%s: inform input %d of device:%d", __func__,
                      inputDescriptor->mIoHandle, device);
                inputCmd.addInt(String8(AudioParameter::keyRouting),device);
                mpClientInterface->setParameters(inputDescriptor->mIoHandle,
                                                 inputCmd.toString(),
                                                 delayMs);
            }
        }
    }
​
    // update stream volumes according to new device
    applyStreamVolumes(outputDesc, device, delayMs);
​
    return muteWaitMs;
}

This function handles mute sequencing (checkDeviceMuteStrategies()), filters the requested device against the devices supported by the output, and then performs the actual routing. Its core logic is the call to mpClientInterface->createAudioPatch(), which creates an AudioPatch connecting the output (one source) to the selected sink devices.
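The patch handed to createAudioPatch() is shaped like the following sketch: one source (the output's mix port) and one sink per selected device. The struct and the cap below are simplified stand-ins for audio_patch and AUDIO_PATCH_PORTS_MAX:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

constexpr size_t kPatchPortsMax = 16;  // stand-in for AUDIO_PATCH_PORTS_MAX
struct PortConfig { int portId; };
struct Patch {
    unsigned numSources = 0;
    unsigned numSinks = 0;
    PortConfig sources[kPatchPortsMax] = {};
    PortConfig sinks[kPatchPortsMax] = {};
};

// Mirrors the routing step above: sources[0] is the output, and each device in
// deviceList becomes one sink, capped at kPatchPortsMax.
inline Patch buildPatch(PortConfig source, const std::vector<PortConfig>& deviceList) {
    Patch patch;
    patch.sources[0] = source;
    patch.numSources = 1;
    for (size_t i = 0; i < deviceList.size() && i < kPatchPortsMax; i++) {
        patch.sinks[i] = deviceList[i];
        patch.numSinks++;
    }
    return patch;
}
```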

Opening the input device

With the output device covered, the input device is opened next:

cpp
// Open the input device
status_t status = inputDesc->open(nullptr,
                                  profileType,
                                  address,
                                  AUDIO_SOURCE_MIC,
                                  AUDIO_INPUT_FLAG_NONE,
                                  &input);

The flow mirrors that of opening an output device:

cpp
status_t AudioInputDescriptor::open(const audio_config_t *config,
                                       audio_devices_t device,
                                       const String8& address,
                                       audio_source_t source,
                                       audio_input_flags_t flags,
                                       audio_io_handle_t *input)
{
    audio_config_t lConfig;
    if (config == nullptr) {
        lConfig = AUDIO_CONFIG_INITIALIZER;
        lConfig.sample_rate = mSamplingRate;
        lConfig.channel_mask = mChannelMask;
        lConfig.format = mFormat;
    } else {
        lConfig = *config;
    }
​
    mDevice = device;
​
    ALOGV("opening input for device %08x address %s profile %p name %s",
          mDevice, address.string(), mProfile.get(), mProfile->getName().string());
​
    status_t status = mClientInterface->openInput(mProfile->getModuleHandle(),
                                                  input,
                                                  &lConfig,
                                                  &mDevice,
                                                  address,
                                                  source,
                                                  flags);
    LOG_ALWAYS_FATAL_IF(mDevice != device,
                        "%s openInput returned device %08x when given device %08x",
                        __FUNCTION__, mDevice, device);
​
    if (status == NO_ERROR) {
        LOG_ALWAYS_FATAL_IF(*input == AUDIO_IO_HANDLE_NONE,
                            "%s openInput returned input handle %d for device %08x",
                            __FUNCTION__, *input, device);
        mSamplingRate = lConfig.sample_rate;
        mChannelMask = lConfig.channel_mask;
        mFormat = lConfig.format;
        mId = AudioPort::getNextUniqueId();
        mIoHandle = *input;
        mProfile->curOpenCount++;
    }
​
    return status;
}

Here too, mClientInterface->openInput() is called to open the specified audio input device at the HAL layer.

Updating output devices

After both the output and input devices have been opened, the cached output device selection is updated last:

cpp
void AudioPolicyManager::updateDevicesAndOutputs()
{
    for (int i = 0; i < NUM_STRATEGIES; i++) {
        mDeviceForStrategy[i] = getDeviceForStrategy((routing_strategy)i, false /*fromCache*/);
    }
    mPreviousOutputs = mOutputs;
}

AudioPolicyEffects initialization

After the AudioPolicyManager object is created successfully, the AudioPolicyEffects object is constructed next.

cpp
AudioPolicyEffects::AudioPolicyEffects()
{
    status_t loadResult = loadAudioEffectXmlConfig();
    if (loadResult < 0) {
        ALOGW("Failed to load XML effect configuration, fallback to .conf");
        // load automatic audio effect modules
        if (access(AUDIO_EFFECT_VENDOR_CONFIG_FILE, R_OK) == 0) {
            loadAudioEffectConfig(AUDIO_EFFECT_VENDOR_CONFIG_FILE);
        } else if (access(AUDIO_EFFECT_DEFAULT_CONFIG_FILE, R_OK) == 0) {
            loadAudioEffectConfig(AUDIO_EFFECT_DEFAULT_CONFIG_FILE);
        }
    } else if (loadResult > 0) {
        ALOGE("Effect config is partially invalid, skipped %d elements", loadResult);
    }
}

In the AudioPolicyEffects constructor, the XML effect configuration is loaded first; if that fails, the constructor falls back to the legacy .conf files, using access() to pick the first readable candidate, preferring the vendor file over the default one.
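The fallback order can be sketched as follows; the candidate paths are illustrative stand-ins for AUDIO_EFFECT_VENDOR_CONFIG_FILE and AUDIO_EFFECT_DEFAULT_CONFIG_FILE, and pickEffectConfig is a hypothetical helper:

```cpp
#include <cassert>
#include <string>
#include <vector>
#include <unistd.h>  // access(), R_OK

// If the XML config loaded, use it; otherwise probe the .conf candidates in
// order with access() and take the first readable one, as the constructor does.
inline std::string pickEffectConfig(bool xmlLoaded,
                                    const std::vector<std::string>& confCandidates) {
    if (xmlLoaded) return "xml";
    for (const std::string& path : confCandidates) {
        if (access(path.c_str(), R_OK) == 0) return path;
    }
    return "";  // no usable configuration found
}
```

In the real constructor the candidate list would be {AUDIO_EFFECT_VENDOR_CONFIG_FILE, AUDIO_EFFECT_DEFAULT_CONFIG_FILE}, so the vendor file wins whenever it is readable.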

Once the AudioPolicyEffects object has been created, it is assigned to mAudioPolicyEffects in AudioPolicyService, which keeps a reference to the newly created object.

SoundTriggerHwService

Likewise, SoundTriggerHwService::instantiate() is inherited from BinderService and registers SoundTriggerHwService with ServiceManager.

cpp
SoundTriggerHwService::SoundTriggerHwService()
    : BnSoundTriggerHwService(),
      mNextUniqueId(1),
      mMemoryDealer(new MemoryDealer(1024 * 1024, "SoundTriggerHwService")),
      mCaptureState(false)
{
}
​
void SoundTriggerHwService::onFirstRef()
{
    int rc;
​
    sp<SoundTriggerHalInterface> halInterface =
            SoundTriggerHalInterface::connectModule(HW_MODULE_PREFIX);
​
    if (halInterface == 0) {
        ALOGW("could not connect to HAL");
        return;
    }
    sound_trigger_module_descriptor descriptor;
    rc = halInterface->getProperties(&descriptor.properties);
    if (rc != 0) {
        ALOGE("could not read implementation properties");
        return;
    }
    descriptor.handle =
            (sound_trigger_module_handle_t)android_atomic_inc(&mNextUniqueId);
    ALOGI("loaded default module %s, handle %d", descriptor.properties.description,
                                                 descriptor.handle);
​
    sp<Module> module = new Module(this, halInterface, descriptor);
    mModules.add(descriptor.handle, module);
    mCallbackThread = new CallbackThread(this);
}