Android AudioFlinger (4): Unveiling PlaybackThread

Preface:

Following the previous article, Android AudioFlinger (3): Device Management, we know that PlaybackThread inherits from RefBase, so onFirstRef() is called the first time it is referenced. Its implementation is as follows:

cpp
void AudioFlinger::PlaybackThread::onFirstRef()
{
    // start the worker thread with urgent audio priority; threadLoop() runs on it
    run(mThreadName, ANDROID_PRIORITY_URGENT_AUDIO);
}

It simply calls run(), which starts a thread whose body repeatedly invokes threadLoop():

cpp 复制代码
bool AudioFlinger::PlaybackThread::threadLoop()
{
...
}

Next, let's look more closely at what the loop body of PlaybackThread actually does.

Unveiling PlaybackThread

Once execution enters threadLoop(), the audio work of PlaybackThread has officially started. There is a lot of code, but on a careful read the key calls are just a few, all prefixed with threadLoop_: threadLoop_standby, threadLoop_mix, threadLoop_sleepTime, threadLoop_write, and so on, meaning they are all invoked from inside threadLoop().

cpp
bool AudioFlinger::PlaybackThread::threadLoop()
{
...
    while (!exitPending())
    {
  ...
        { // scope for mLock
            // this scope exists to bound the lifetime of the Autolock _l
            Mutex::Autolock _l(mLock);
            // handle pending config events
            processConfigEvents_l();            
            if ((!mActiveTracks.size() && systemTime() > mStandbyTimeNs) ||
                                   isSuspended()) {
                if (shouldStandby_l()) {
                    // enter standby to save power
                    threadLoop_standby();
                }
            }
            // prepare the active tracks for mixing
            mMixerStatus = prepareTracks_l(&tracksToRemove);
        } // mLock scope ends
    ...
        if (mBytesRemaining == 0) {
            mCurrentWriteLength = 0;
            if (mMixerStatus == MIXER_TRACKS_READY) {
                // pull data from all active tracks and let the mixer mix them
                threadLoop_mix();
            } else if ((mMixerStatus != MIXER_DRAIN_TRACK)
                        && (mMixerStatus != MIXER_DRAIN_ALL)) {
                // compute how long to sleep
                threadLoop_sleepTime();
            }
        }
    ...
        if (!waitingAsyncCallback()) {
            if (mSleepTimeUs == 0) {
                if (mBytesRemaining) {
                    // write the mixed data to the output stream device
                    ret = threadLoop_write();
                } else if ((mMixerStatus == MIXER_DRAIN_TRACK) ||
                        (mMixerStatus == MIXER_DRAIN_ALL)) {
                    threadLoop_drain();
                }
             ...
            }
        }
... 
        // remove tracks that are done
        threadLoop_removeTracks(tracksToRemove);
        tracksToRemove.clear();
        clearOutputTracks();
        effectChains.clear();
    }
    threadLoop_exit();
    if (!mStandby) {
        threadLoop_standby();
        mStandby = true;
    }
    releaseWakeLock();
    return false;
}

First, exitPending() is the condition of the threadLoop() while loop. It is an internal Thread function that decides whether the thread should exit by checking mExitPending. This flag defaults to false and becomes true when requestExit() or requestExitAndWait() is called, after which the loop exits.

Thread path: /system/core/libutils/Threads.cpp
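
As a simplified sketch of that mechanism (not the exact libutils source, which also synchronizes with a mutex and condition variable), the relationship between run()/threadLoop()/requestExit() is roughly:

cpp
// Simplified sketch of the libutils Thread exit handshake (not the exact
// Threads.cpp source): threadLoop() keeps being called on the worker thread
// until it returns false or an exit has been requested.
class ThreadSketch {
public:
    virtual ~ThreadSketch() {}

    void requestExit() {
        mExitPending = true;            // picked up by exitPending() below
    }
    bool exitPending() const {
        return mExitPending;
    }

protected:
    // each subclass (e.g. PlaybackThread) implements one loop iteration here
    virtual bool threadLoop() = 0;

    // runs on the worker thread started by run()
    void _threadLoop() {
        bool again = true;
        while (again && !exitPending()) {
            again = threadLoop();       // PlaybackThread::threadLoop() runs here
        }
    }

private:
    volatile bool mExitPending = false;
};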

processConfigEvents_l: handles config events. When a configuration change occurs, sendConfigEvent_l() appends the event to mConfigEvents; when processConfigEvents_l() later sees it, it applies the corresponding configuration.
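
Conceptually this is just a queue of pending events consumed at the top of each loop iteration while mLock is held. A minimal sketch of the pattern (the ConfigEvent type and handling below are simplified placeholders, not the real AudioFlinger classes, which also carry a Condition for synchronous events):

cpp
// Minimal sketch of the config-event queue pattern used by PlaybackThread.
#include <deque>
#include <memory>

struct ConfigEvent {
    int type = 0;                       // e.g. parameter change, device update, ...
    virtual ~ConfigEvent() = default;
};

class ConfigEventQueueSketch {
public:
    // producer side: called (with the thread lock held) when a config change happens
    void sendConfigEvent_l(std::unique_ptr<ConfigEvent> event) {
        mConfigEvents.push_back(std::move(event));
    }

    // consumer side: called at the top of each threadLoop() iteration
    void processConfigEvents_l() {
        while (!mConfigEvents.empty()) {
            std::unique_ptr<ConfigEvent> event = std::move(mConfigEvents.front());
            mConfigEvents.pop_front();
            handle(*event);             // apply the new configuration
        }
    }

private:
    void handle(const ConfigEvent&) { /* update parameters, devices, ... */ }
    std::deque<std::unique_ptr<ConfigEvent>> mConfigEvents;
};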

threadLoop_standby: if shouldStandby_l() decides the current state meets the standby conditions, threadLoop_standby() is called. The real work ultimately happens in the HAL layer, which closes the audio stream and performs similar power-saving operations.
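
As a rough sketch of that chain (simplified; the real AOSP function also resets async-write state and varies between versions), the thread side is little more than a forward to the output stream wrapper, which in turn calls into the stream HAL's standby entry point:

cpp
// Rough sketch only: the real threadLoop_standby() also handles async-write
// callbacks and bookkeeping; mOutput is the AudioStreamOut wrapping the HAL stream.
void AudioFlinger::PlaybackThread::threadLoop_standby()
{
    ALOGV("Audio hardware entering standby");
    mOutput->standby();   // forwards to the stream HAL, which may power down the path
}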

prepareTracks_l: this function is quite complex; let's summarize it briefly and pick out a few key points.

cpp
// prepareTracks_l() must be called with ThreadBase::mLock held
AudioFlinger::PlaybackThread::mixer_state AudioFlinger::MixerThread::prepareTracks_l(
        Vector< sp<Track> > *tracksToRemove)
{
    // number of currently active tracks
    size_t count = mActiveTracks.size();
    for (size_t i=0 ; i<count ; i++) {
        // iterate over each active track
        const sp<Track> t = mActiveTracks[i];
        // this const just means the local variable doesn't change
        Track* const track = t.get();
        // process fast tracks
        if (track->isFastTrack()) {
            // special handling for fast tracks goes here
        }
...
        {   // local variable scope to avoid goto warning
        // prepare the shared control/data block
        audio_track_cblk_t* cblk = track->cblk();
        // fetch the track's audio parameters
        const uint32_t sampleRate = track->mAudioTrackServerProxy->getSampleRate();
        AudioPlaybackRate playbackRate = track->mAudioTrackServerProxy->getPlaybackRate();

        desiredFrames = sourceFramesNeededWithTimestretch(
                sampleRate, mNormalFrameCount, mSampleRate, playbackRate.mSpeed);

        desiredFrames += mAudioMixer->getUnreleasedFrames(track->name());

        uint32_t minFrames = 1;
        if ((track->sharedBuffer() == 0) && !track->isStopped() && !track->isPausing() &&
                (mMixerStatusIgnoringFastTracks == MIXER_TRACKS_READY)) {
            // minimum number of frames that must be ready
            minFrames = desiredFrames;
        }
        size_t framesReady = track->framesReady();
        if ((framesReady >= minFrames) && track->isReady() &&
                !track->isPaused() && !track->isTerminated())
        {
            mixedTracks++;
            // compute volume for this track
            uint32_t vl, vr;       // in U8.24 integer format
            float vlf, vrf, vaf;   // in [0.0, 1.0] float format: left, right, aux send level
            // read original volumes with volume control
            float typeVolume = mStreamTypes[track->streamType()].volume; // volume for this stream type
            float v = masterVolume * typeVolume; // master volume times stream-type volume

            if (track->isPausing() || mStreamTypes[track->streamType()].mute) {
                vl = vr = 0;
                vlf = vrf = vaf = 0.; // zero means the track is muted
                if (track->isPausing()) {
                    track->setPaused(); // mark the track as paused
                }
            } else {
                sp<AudioTrackServerProxy> proxy = track->mAudioTrackServerProxy;
                gain_minifloat_packed_t vlr = proxy->getVolumeLR(); // packed left/right gain from the client
                vlf = float_from_gain(gain_minifloat_unpack_left(vlr));
                vrf = float_from_gain(gain_minifloat_unpack_right(vlr)); // convert to float
                // track volumes come from shared memory, so can't be trusted and must be clamped
                // clamp to a sane range
                if (vlf > GAIN_FLOAT_UNITY) {
                    ALOGV("Track left volume out of range: %.3g", vlf);
                    vlf = GAIN_FLOAT_UNITY;
                }
                if (vrf > GAIN_FLOAT_UNITY) {
                    ALOGV("Track right volume out of range: %.3g", vrf);
                    vrf = GAIN_FLOAT_UNITY;
                }
                const float vh = track->getVolumeHandler()->getVolume(
                        track->mAudioTrackServerProxy->framesReleased()).first;
                // now apply the master volume and stream type volume and shaper volume
                vlf *= v * vh;
                vrf *= v * vh;
                // assuming master volume and stream type volume each go up to 1.0,
                // then derive vl and vr as U8.24 versions for the effect chain
                const float scaleto8_24 = MAX_GAIN_INT * MAX_GAIN_INT;
                vl = (uint32_t) (scaleto8_24 * vlf);
                vr = (uint32_t) (scaleto8_24 * vrf);
                // vl and vr are now in U8.24 format
                uint16_t sendLevel = proxy->getSendLevel_U4_12();
                // send level comes from shared memory and so may be corrupt
                if (sendLevel > MAX_GAIN_INT) {
                    ALOGV("Track send level out of range: %04X", sendLevel);
                    sendLevel = MAX_GAIN_INT;
                }
                // vaf is represented as [0.0, 1.0] float by rescaling sendLevel
                vaf = v * sendLevel * (1. / MAX_GAIN_INT);
            }

            track->setFinalVolume((vrf + vlf) / 2.f);

            // XXX: these things DON'T need to be done each time
            mAudioMixer->setBufferProvider(name, track);
            mAudioMixer->enable(name);

            mAudioMixer->setParameter(name, param, AudioMixer::VOLUME0, &vlf);
            mAudioMixer->setParameter(name, param, AudioMixer::VOLUME1, &vrf);
            mAudioMixer->setParameter(name, param, AudioMixer::AUXLEVEL, &vaf);
 ...
        } else {
           ...
        }
        }   // local variable scope to avoid goto warning
    }

    return mixerStatus;
}

mActiveTracks records the tracks currently in the active state; the loop then visits each of them in turn and fetches the corresponding audio parameters.

audio_track_cblk_t is the audio data control block; we will expand on it later.

Next, minFrames represents the minimum number of frames needed for this round of playback; its initial value is 1. When track->sharedBuffer() == 0, the AudioTrack is not in STATIC mode (its data is streamed rather than delivered in one shot).

getUnreleasedFrames returns the number of frames the mixer has fetched from the track's buffer but not yet released back, i.e. frames that have not yet been fully processed downstream.

Once minFrames is computed, the code checks whether the track's various conditions meet the requirements for mixing.
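
As a rough worked example of the desiredFrames estimate (this approximates AOSP's sourceFramesNeededWithTimestretch; treat the exact +2 rounding margins as an assumption): producing 1024 output frames at 48 kHz from a 44.1 kHz track at 1x speed needs roughly 941 source frames plus a small rounding margin.

cpp
// Worked example of the desiredFrames estimate (an approximation of AOSP's
// sourceFramesNeededWithTimestretch; the exact rounding terms may differ).
#include <cstddef>
#include <cstdint>
#include <cstdio>

static size_t sourceFramesNeededWithTimestretch(
        uint32_t srcSampleRate, size_t dstFramesRequired,
        uint32_t dstSampleRate, float speed)
{
    // frames the resampler needs, +2 as a rounding/interpolation margin
    size_t required = (srcSampleRate == dstSampleRate) ? dstFramesRequired :
            size_t((uint64_t)dstFramesRequired * srcSampleRate / dstSampleRate + 2);
    // the time stretcher consumes 'speed' times as many source frames
    return size_t(required * (double)speed) + 2;
}

int main()
{
    // e.g. a 44.1 kHz track feeding a 48 kHz mixer, 1024 output frames, 1x speed
    size_t frames = sourceFramesNeededWithTimestretch(44100, 1024, 48000, 1.0f);
    printf("source frames needed: %zu\n", frames);   // 944 with this sketch's rounding
    return 0;
}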

vlf, vrf and vaf are the left-channel volume, right-channel volume and aux send level respectively, expressed as floats.

The volume for the corresponding stream type is looked up via streamType, checked against a sane range, and the computed values are finally programmed into the AudioMixer object. Once this preparation is done, the real mixing begins.
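
The fixed-point conversions are worth a closer look. Assuming MAX_GAIN_INT is 0x1000 (consistent with the U4.12/U8.24 comments in the code above), scaleto8_24 = 0x1000 * 0x1000 = 1 << 24, so a float gain in [0.0, 1.0] maps to U8.24, and the U4.12 send level maps back to float. A small sketch of just those conversions (without the extra master/stream-type factor):

cpp
// Sketch of the fixed-point gain conversions used in prepareTracks_l.
// Assumption: MAX_GAIN_INT == 0x1000; U8.24 = 8 integer bits / 24 fractional bits,
// U4.12 = 4 integer bits / 12 fractional bits.
#include <cstdint>
#include <cstdio>

static const uint32_t MAX_GAIN_INT = 0x1000;                       // 1.0 in U4.12
static const float    scaleto8_24  = MAX_GAIN_INT * MAX_GAIN_INT;  // 1 << 24

int main()
{
    float vlf = 0.5f;                                  // left gain as float
    uint32_t vl = (uint32_t)(scaleto8_24 * vlf);       // 0x00800000 in U8.24

    uint16_t sendLevel = 0x0800;                       // 0.5 in U4.12, from shared memory
    float vaf = sendLevel * (1.f / MAX_GAIN_INT);      // back to [0.0, 1.0] float

    printf("vl = 0x%08X, vaf = %.3f\n", (unsigned)vl, vaf);  // vl = 0x00800000, vaf = 0.500
    return 0;
}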

threadLoop_mix: mainly calls AudioMixer::process() to do the actual mixing, which takes us into AudioMixer.

cpp
void AudioFlinger::MixerThread::threadLoop_mix()
{
    // kick off the mix
    mAudioMixer->process();
    mCurrentWriteLength = mSinkBufferSize;
    // increase the sleep time progressively as the application's underrun clears;
    // only do so when the mixer has been ready twice in a row, to avoid a steady
    // state of alternating ready/not-ready that would keep a sleep time large
    // enough to underrun the audio HAL.
    if ((mSleepTimeUs == 0) && (sleepTimeShift > 0)) {
        sleepTimeShift--;
    }
    mSleepTimeUs = 0;
    mStandbyTimeNs = systemTime() + mStandbyDelayNs;
    //TODO: delay standby when effects have a tail

}
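
For intuition about what process() ultimately produces, here is a toy sketch of the core mixing idea: pull samples from each enabled track, apply its gain, accumulate into the sink buffer, and clamp. This is not the real AudioMixer (which also resamples, timestretches, remaps channels and handles many sample formats); the types below are made up for illustration.

cpp
// Toy illustration of "mixing"; not the real AudioMixer.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct ToyTrack {
    const float* data;   // mono source samples in [-1.0, 1.0]
    float gain;          // per-track gain, already combined with stream/master volume
};

void mixToSink(const std::vector<ToyTrack>& tracks, int16_t* sink, size_t frames)
{
    for (size_t i = 0; i < frames; i++) {
        float acc = 0.f;
        for (const ToyTrack& t : tracks) {
            acc += t.data[i] * t.gain;               // accumulate gained samples
        }
        acc = std::min(1.f, std::max(-1.f, acc));    // clamp to avoid wrap-around
        sink[i] = (int16_t)(acc * 32767.f);          // convert to PCM16 for the sink
    }
}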

The last step is writing the data down to the HAL layer: threadLoop_write.

When mNormalSink exists, its write() is used; otherwise mOutput->write() is used, where mOutput is the AudioStreamOut.

cpp
ssize_t AudioFlinger::PlaybackThread::threadLoop_write()
{
    ssize_t bytesWritten;
    // If an NBAIO sink is present, use it to write the normal mixer's submix
    if (mNormalSink != 0) {
        ssize_t framesWritten = mNormalSink->write((char *)mSinkBuffer + offset, count);
        ATRACE_END();
        if (framesWritten > 0) {
            bytesWritten = framesWritten * mFrameSize;
        } else {
            bytesWritten = framesWritten;
        }
    // otherwise use the HAL / AudioStreamOut directly
    } else {
        bytesWritten = mOutput->write((char *)mSinkBuffer + offset, mBytesRemaining);
    }
    return bytesWritten;
}
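
One detail worth noting: the NBAIO sink's write() deals in frames, while AudioStreamOut::write() deals in bytes, hence the framesWritten * mFrameSize conversion above. A quick illustration of that conversion (the numbers are examples, not fixed values):

cpp
// Illustrative frames-to-bytes conversion; the numbers are examples only.
#include <cstddef>
#include <cstdio>

int main()
{
    const size_t channels       = 2;                          // stereo
    const size_t bytesPerSample = 2;                          // 16-bit PCM
    const size_t mFrameSize     = channels * bytesPerSample;  // 4 bytes per frame

    const long framesWritten = 1024;                          // as returned by an NBAIO sink
    const long bytesWritten  = framesWritten * (long)mFrameSize;  // 4096 bytes
    printf("%ld frames -> %ld bytes\n", framesWritten, bytesWritten);
    return 0;
}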

After the write completes, the various cleanup routines are called: remove, clear, and so on.

cpp
// Finally let go of removed track(s), without the lock held
// since we can't guarantee the destructors won't acquire that
// same lock.  This will also mutate and push a new fast mixer state.
threadLoop_removeTracks(tracksToRemove);
tracksToRemove.clear();

// FIXME I don't understand the need for this here;
//       it was in the original code but maybe the
//       assignment in saveOutputTracks() makes this unnecessary?
clearOutputTracks();

// Effect chains will be actually deleted here if they were removed from
// mEffectChains list during mixing or effects processing
effectChains.clear();

cpp
void AudioFlinger::PlaybackThread::threadLoop_removeTracks(
        const Vector< sp<Track> >& tracksToRemove)
{
    size_t count = tracksToRemove.size();
    if (count > 0) {
        for (size_t i = 0 ; i < count ; i++) {
            const sp<Track>& track = tracksToRemove.itemAt(i);
            if (track->isExternalTrack()) {
                AudioSystem::stopOutput(mId, track->streamType(),
                                        track->sessionId());
                if (track->isTerminated()) {
                    AudioSystem::releaseOutput(mId, track->streamType(),
                                               track->sessionId());
                }
            }
        }
    }
}
android·ffmpeg·音视频·流媒体