ExoPlayer Architecture Explained with Source Code Analysis (15) — Renderer

Series Index


Preface

Everyone has watched a movie in a cinema. Movies used to be shot on film, edited (physically), copied, and then shipped to cinemas for projection. In the silent era the film carried only the images, so the projector had no synchronization to handle, but that does not mean audio-video sync didn't exist. Tom and Jerry, which everyone has seen, was a silent production. You may object that it clearly has sound; in fact that background orchestral score was performed live by a symphony orchestra, with the conductor watching the screen and conducting according to a prepared plan. You can regard this as the most primitive form of audio-video synchronization: human intelligence syncing in real time and correcting errors in real time.

Later came sound film, and the once-glamorous cinema orchestra musicians lost their jobs en masse (history keeps repeating itself; doesn't this look like AI today?). A sound film's stock carried not only the video but also the audio. Early audio was written along the edge of the film as the analog soundtrack shown in the image below, a soundtrack in the physical sense. Below the projected frame sits a photoelectric sensor that converts the optical soundtrack into an electrical signal, which is then amplified into sound. This sensor's position is adjustable. The rows of holes along both edges of the film are called perforations; as the name suggests, the projector's sprocket teeth engage them and advance the film at a constant speed, keeping picture and sound in sync. When they do drift apart, the sound is fine-tuned by moving the soundtrack sensor: the farther it is from the projected frame, the more the sound is delayed; the closer, the earlier it plays.

The digital era added digital soundtracks alongside the analog one. The Dolby soundtrack in the image below can be imagined as a QR code storing the digitized audio; reading and decoding it yields the current sound. Staying in sync no longer requires the old complexity either: the film also carries DTS timing information, the soundtrack carries timestamps, and the digital player compares the two to keep them aligned. Today almost every cinema projects digital prints through playback software and hardware, and film stock is gone (apart from Nolan, who recently went and shot on IMAX film again). In the computer era any phone in your pocket can decode a 4K movie, and a software player can keep audio and video in sync. ExoPlayer can too, thanks mainly to the other major component covered today, the Renderer. One cross-section of cinema history is really a history of audio-video sync, and synchronization is exactly the core of the Renderer.

If you have read and understood the earlier MediaSource articles, you know how data is fetched, parsed, and placed into the buffers. Skipping the control and management layers in between, the final destination of that data is the Renderer. Think of the Renderer as a rocket's turbine engine: it continuously draws fuel from the MediaSource, ignites it, and provides the powerful thrust that lifts the rocket. And as with a rocket, a stable ascent requires the engine to execute its preset actions at precisely the right moments, which demands a well-designed time synchronization scheme.

Renderer

A Renderer renders media read from a SampleStream. Internally, a renderer's lifecycle is managed by the owning ExoPlayer. As the overall playback state and the set of enabled tracks change, the renderer transitions between various states. The valid state transitions are shown in the diagram below, annotated with the methods called during each transition.

The main methods:

  • init: initializes the Renderer; index is this renderer's index among all renderers, and playerId identifies the owning player
  • enable: enables the renderer to consume the given SampleStream; may only be called while the renderer is in the Disabled state
  • start: starts the renderer, meaning that calls to render will cause media to be rendered; may only be called in the Enabled state
  • render: incrementally renders the SampleStream; may be called in the Enabled or Started state
  • replaceStream: replaces the SampleStream; may be called in the Enabled or Started state

After a Renderer is created, init is usually called first to initialize it; then enable is called with the SampleStream (internally, enable calls replaceStream to install the stream); then start moves the state to Started; finally render is called to begin rendering.
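Put into code, that call order looks roughly like this. This is a hypothetical driver sketch with placeholder arguments; in the real player, ExoPlayerImplInternal makes these calls with values derived from track selection.

java
  // Sketch of the renderer lifecycle described above (placeholder arguments).
  renderer.init(/* index= */ 0, playerId);             // initialized, still Disabled
  renderer.enable(config, formats, sampleStream, positionUs,
      /* joining= */ false, /* mayRenderStartOfStream= */ true,
      startPositionUs, offsetUs);                      // Disabled -> Enabled, calls replaceStream internally
  renderer.start();                                    // Enabled -> Started
  while (playing) {
    // called repeatedly from the playback loop
    renderer.render(positionUs, SystemClock.elapsedRealtime() * 1000);
  }
  renderer.stop();                                     // Started -> Enabled
  renderer.disable();                                  // Enabled -> Disabled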

Now let's look at the overall structure of the Renderer module.

Renderer is implemented directly by the abstract class BaseRenderer. Below it, MediaCodecRenderer (audio/video), TextRenderer (subtitles), MetadataRenderer (metadata), and CameraMotionRenderer (camera-motion data, for VR/panoramic content) are the renderers for the corresponding track types. Space is limited, so this article focuses on audio and video, i.e. MediaCodecRenderer; the other renderers are left to the interested reader. As the diagram shows, MediaCodecRenderer splits into two branches, Video and Audio: video is ultimately handled by Android's MediaCodec, while audio is ultimately handed to Android's AudioTrack. A sketch of how a player might assemble these renderers follows.
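As a rough illustration (simplified and hypothetical; the real DefaultRenderersFactory adds many options, orderings, and decoder fallbacks), a player could be assembled with one renderer per track type:

java
  // Hypothetical, simplified renderer set for a player: one renderer per track type.
  Renderer[] renderers = new Renderer[] {
      new MediaCodecVideoRenderer(context, MediaCodecSelector.DEFAULT), // video -> MediaCodec
      new MediaCodecAudioRenderer(context, MediaCodecSelector.DEFAULT), // audio -> MediaCodec + AudioTrack
      new TextRenderer(textOutput, outputLooper),                       // subtitles
      new MetadataRenderer(metadataOutput, outputLooper),               // metadata
  };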

BaseRenderer

The direct implementation of Renderer. It mainly handles state bookkeeping and the management of a few shared fields.

java
  @Override
  public final void init(int index, PlayerId playerId) {
    this.index = index;
    this.playerId = playerId;
  }
  
  @Override
  public final void enable(
      RendererConfiguration configuration, // renderer configuration
      Format[] formats, // track formats
      SampleStream stream, // the data to render
      long positionUs, // current playback position
      boolean joining, // whether this renderer is being enabled to join an ongoing playback
      boolean mayRenderStartOfStream, // whether the renderer may render the start of the stream before STATE_STARTED
      long startPositionUs, // start position of rendering
      long offsetUs) // offset added to the timestamps of buffers read from the stream before rendering
      throws ExoPlaybackException {
    Assertions.checkState(state == STATE_DISABLED);
    this.configuration = configuration;
    state = STATE_ENABLED;
    onEnabled(joining, mayRenderStartOfStream); // notify the subclass
    replaceStream(formats, stream, startPositionUs, offsetUs);
    resetPosition(positionUs, joining);
  }
  
  @Override
  public final void replaceStream(
      Format[] formats, SampleStream stream, long startPositionUs, long offsetUs)
      throws ExoPlaybackException {
    Assertions.checkState(!streamIsFinal);
    this.stream = stream; // replace the current SampleStream
    if (readingPositionUs == C.TIME_END_OF_SOURCE) {
      readingPositionUs = startPositionUs;
    }
    streamFormats = formats;
    streamOffsetUs = offsetUs;
    onStreamChanged(formats, startPositionUs, offsetUs); // implemented by subclasses
  }
  
  @Override
  public final void start() throws ExoPlaybackException {
    Assertions.checkState(state == STATE_ENABLED);
    state = STATE_STARTED; // state transition
    onStarted(); // implemented by subclasses
  }
  // BaseRenderer also provides readSource, for reading data from the SampleStream.
  // readFlags specifies what kind of data to read.
  protected final @ReadDataResult int readSource(
      FormatHolder formatHolder, DecoderInputBuffer buffer, @ReadFlags int readFlags) {
    @ReadDataResult
    // this stream ultimately pulls its data from the SampleQueue covered earlier
    int result = Assertions.checkNotNull(stream).readData(formatHolder, buffer, readFlags);
    if (result == C.RESULT_BUFFER_READ) { // a buffer was read
      if (buffer.isEndOfStream()) {
        readingPositionUs = C.TIME_END_OF_SOURCE;
        return streamIsFinal ? C.RESULT_BUFFER_READ : C.RESULT_NOTHING_READ;
      }
      buffer.timeUs += streamOffsetUs;
      readingPositionUs = max(readingPositionUs, buffer.timeUs);
    } else if (result == C.RESULT_FORMAT_READ) { // a format was read
      Format format = Assertions.checkNotNull(formatHolder.format);
      if (format.subsampleOffsetUs != Format.OFFSET_SAMPLE_RELATIVE) {
        format =
            format
                .buildUpon()
                .setSubsampleOffsetUs(format.subsampleOffsetUs + streamOffsetUs)
                .build();
        formatHolder.format = format;
      }
    }
    return result;
  }

BaseRenderer's implementation is fairly simple, so the focus is on its subclasses, chiefly MediaCodecRenderer. Before reading MediaCodecRenderer, though, we need to understand Android's MediaCodec.

MediaCodec

MediaCodec provides access to low-level media codecs, i.e. the encoder/decoder components. It is part of Android's low-level multimedia support infrastructure. Here we focus on the decoding process. Does anyone still remember this diagram, which appeared in the SampleQueue article?

A decoder consumes encoded input data and produces decoded output data, processing it asynchronously through a set of input and output buffers. First the caller configures the codec via MediaCodec.configure, requests an empty input buffer with MediaCodec.dequeueInputBuffer, fills it with encoded data read from elsewhere, and submits it with queueInputBuffer for processing. MediaCodec decodes the queued input and writes the result to an output buffer. The caller then requests a filled output buffer with MediaCodec.dequeueOutputBuffer, consumes the decoded data (rendering it wherever needed), and returns the buffer to MediaCodec with releaseOutputBuffer. If a Surface was passed to MediaCodec.configure, releaseOutputBuffer renders the decoded data directly onto that Surface.

Over its lifecycle a MediaCodec is in one of three states: Stopped, Executing, or Released. Stopped is actually a combination of three states, Uninitialized, Configured, and Error, while Executing passes through three sub-states: Flushed, Running, and End-of-Stream.

When a MediaCodec is created it is in the Uninitialized state. First you configure it, which moves it to the Configured state, and then call start to move it to Executing. In the Executing state, data is processed through the buffer-queue operations described above. Executing has three sub-states: Flushed, Running, and End-of-Stream. Immediately after start the codec is in the Flushed sub-state, holding all of its buffers. As soon as the first input buffer is dequeued, the codec moves to the Running sub-state, where it spends most of its life. When MediaCodec.queueInputBuffer is called with the BUFFER_FLAG_END_OF_STREAM flag, the codec transitions to the End-of-Stream sub-state; it accepts no further input buffers but keeps producing output buffers until the end of stream is reached on the output side. While in the Executing state, a decoder can be returned to the Flushed sub-state at any time by calling flush. Calling stop returns the codec to Uninitialized, after which it can be configured again. When you are done with a codec, it must be released by calling release. With all that in mind, let's see how MediaCodecRenderer uses these methods to carry out the whole rendering process.
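Before diving in, here is a minimal synchronous decode loop built from exactly the calls above. This is a sketch rather than ExoPlayer code: error handling and output-format changes are omitted, and extractor is assumed to be a MediaExtractor already positioned on the desired track.

java
  // Minimal synchronous MediaCodec decode loop (sketch; assumes format, surface and extractor exist).
  MediaCodec codec = MediaCodec.createDecoderByType(format.getString(MediaFormat.KEY_MIME));
  codec.configure(format, surface, /* crypto= */ null, /* flags= */ 0); // Uninitialized -> Configured
  codec.start(); // Configured -> Executing (Flushed sub-state until the first input buffer is dequeued)

  MediaCodec.BufferInfo info = new MediaCodec.BufferInfo();
  boolean inputDone = false;
  boolean outputDone = false;
  while (!outputDone) {
    if (!inputDone) {
      int inIndex = codec.dequeueInputBuffer(/* timeoutUs= */ 10_000);
      if (inIndex >= 0) {
        ByteBuffer in = codec.getInputBuffer(inIndex); // API 21+
        int size = extractor.readSampleData(in, /* offset= */ 0);
        if (size < 0) { // no more input: enter the End-of-Stream sub-state
          codec.queueInputBuffer(inIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
          inputDone = true;
        } else {
          codec.queueInputBuffer(inIndex, 0, size, extractor.getSampleTime(), 0);
          extractor.advance();
        }
      }
    }
    int outIndex = codec.dequeueOutputBuffer(info, /* timeoutUs= */ 10_000);
    if (outIndex >= 0) {
      // true = render the decoded buffer to the Surface passed to configure()
      codec.releaseOutputBuffer(outIndex, /* render= */ true);
      outputDone = (info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
    }
  }
  codec.stop();    // back to Uninitialized; could be configured again
  codec.release(); // done with the codec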

MediaCodecRenderer

MediaCodecRenderer renders audio and video by decoding through Android's MediaCodec. It has two main subclasses, MediaCodecVideoRenderer and MediaCodecAudioRenderer. Let's go straight to the render implementation.

java
  @Override
  // positionUs: the current playback timestamp; with an audio track present, this is the audio PTS
  // elapsedRealtimeUs: timestamp taken before this render loop iteration began, mainly used to measure execution time
  public void render(long positionUs, long elapsedRealtimeUs) throws ExoPlaybackException {
    ...
      // We have a format.
      maybeInitCodecOrBypass();
      // bypass: data that can be output directly without codec decoding
      if (bypassEnabled) {
        TraceUtil.beginSection("bypassRender");
        while (bypassRender(positionUs, elapsedRealtimeUs)) {}
        TraceUtil.endSection();
      } else if (codec != null) { // data that must be decoded through MediaCodec
      // record the loop start time, used against renderTimeLimitMs to decide whether to keep looping
        long renderStartTimeMs = SystemClock.elapsedRealtime();
        TraceUtil.beginSection("drainAndFeed");
        // first drain already-decoded data from MediaCodec
        while (drainOutputBuffer(positionUs, elapsedRealtimeUs)
            && shouldContinueRendering(renderStartTimeMs)) {}
        // then feed data awaiting decoding into MediaCodec
        while (feedInputBuffer() && shouldContinueRendering(renderStartTimeMs)) {}
        TraceUtil.endSection();
      }
     ...
  }
  
  protected final void maybeInitCodecOrBypass() throws ExoPlaybackException {
   ...

    if (isBypassPossible(inputFormat)) {
      initBypass(inputFormat); // for data that needs no codec decoding; mainly initializes the bypass buffer, bypassBatchBuffer
      return;
    }
   ...
      // initialize the MediaCodec
      maybeInitCodecWithFallback(mediaCrypto, mediaCryptoRequiresSecureDecoder);
    ...
  }
  private void maybeInitCodecWithFallback(
      @Nullable MediaCrypto crypto, boolean mediaCryptoRequiresSecureDecoder)
      throws DecoderInitializationException {
    if (availableCodecInfos == null) {
      try {
        // use the input format's metadata to collect the codec candidates used for initialization
        List<MediaCodecInfo> allAvailableCodecInfos =
            getAvailableCodecInfos(mediaCryptoRequiresSecureDecoder);
       ...
    }

    ...
          // start initializing the codec
          initCodec(codecInfo, crypto);
...
  }
  
  private void initCodec(MediaCodecInfo codecInfo, @Nullable MediaCrypto crypto) throws Exception {
    // obtain the MediaCodecAdapter.Configuration from the subclass
    MediaCodecAdapter.Configuration configuration =
        getMediaCodecConfiguration(codecInfo, inputFormat, crypto, codecOperatingRate);
    ...
    // create the MediaCodecAdapter; there are two implementations: SynchronousMediaCodecAdapter, which drives MediaCodec synchronously, and AsynchronousMediaCodecAdapter, which uses the asynchronous MediaCodec API for API 23+
    try {
      TraceUtil.beginSection("createCodec:" + codecName);
      codec = codecAdapterFactory.createAdapter(configuration);
    } finally {
      TraceUtil.endSection();
    }
    ...
  }
  // for simplicity, look at SynchronousMediaCodecAdapter
  public MediaCodecAdapter createAdapter(Configuration configuration) throws IOException {
      @Nullable MediaCodec codec = null;
      try {
        // creates the MediaCodec, essentially via MediaCodec.createByCodecName(codecName)
        codec = createCodec(configuration);
        TraceUtil.beginSection("configureCodec");
        // configure the MediaCodec
        codec.configure(
            // the format; KEY_MAX_INPUT_SIZE in it sets the input buffer size, corresponding to format.maxInputSize (see its computation)
            configuration.mediaFormat,
            configuration.surface, // the surface to render onto
            configuration.crypto,
            configuration.flags);
        TraceUtil.endSection();
        TraceUtil.beginSection("startCodec");
        // at this point the MediaCodec is ready to decode at any time
        codec.start();
        TraceUtil.endSection();
        return new SynchronousMediaCodecAdapter(codec);
      } catch (IOException | RuntimeException e) {
        if (codec != null) {
          codec.release();
        }
        throw e;
      }
    }
    // rendering of bypass (direct-output) data
    private boolean bypassRender(long positionUs, long elapsedRealtimeUs)
      throws ExoPlaybackException {

     ...
    // once samples are available, call the subclass's processOutputBuffer to render them
    if (bypassBatchBuffer.hasSamples()) {
      if (processOutputBuffer( // dispatches to MediaCodecAudioRenderer here
          positionUs,
          elapsedRealtimeUs,
          /* codec= */ null,
          bypassBatchBuffer.data,
          outputIndex,
          /* bufferFlags= */ 0,
          bypassBatchBuffer.getSampleCount(),
          bypassBatchBuffer.getFirstSampleTimeUs(),
          bypassBatchBuffer.isDecodeOnly(),
          bypassBatchBuffer.isEndOfStream(),
          outputFormat)) {
        // The batch buffer has been fully processed.
        onProcessedOutputBuffer(bypassBatchBuffer.getLastSampleTimeUs());
        bypassBatchBuffer.clear();
      } else {
        // Could not process the whole batch buffer. Try again later.
        return false;
      }
    }
    ...
    // on the first pass, read sample data into bypassBatchBuffer
    bypassRead();
...
  }
  
  private boolean drainOutputBuffer(long positionUs, long elapsedRealtimeUs)
      throws ExoPlaybackException {
    // If the previous output buffer was rendered successfully it has been reset, hasOutputBuffer() is false,
    // and the next buffer is dequeued here. If the current buffer had to wait for some reason (e.g. it is
    // too early to render), outputBuffer still holds the not-yet-rendered data.
    if (!hasOutputBuffer()) {
      int outputIndex;
      ...
        // get the index of a decoded output buffer
        outputIndex = codec.dequeueOutputBufferIndex(outputBufferInfo);
      }

      if (outputIndex < 0) { // no regular buffer available
        // the output format changed
        if (outputIndex == MediaCodec.INFO_OUTPUT_FORMAT_CHANGED /* (-2) */) {
          processOutputMediaFormatChanged();
          return true;
        }
        // MediaCodec.INFO_TRY_AGAIN_LATER (-1) or unknown negative return value.
        if (codecNeedsEosPropagation
            && (inputStreamEnded || codecDrainState == DRAIN_STATE_WAIT_END_OF_STREAM)) {
          processEndOfStream();
        }
        return false;
      }

      // skip the adaptation-workaround buffer used on certain devices
      if (shouldSkipAdaptationWorkaroundOutputBuffer) {
        shouldSkipAdaptationWorkaroundOutputBuffer = false;
        codec.releaseOutputBuffer(outputIndex, false);
        return true;
      } else if (outputBufferInfo.size == 0
          && (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        // the dequeued buffer carries the end-of-stream flag; finish immediately
        processEndOfStream();
        return false;
      }

      this.outputIndex = outputIndex; // update the outputIndex field
      // look up the output buffer by its index
      outputBuffer = codec.getOutputBuffer(outputIndex);

      // initialize outputBuffer from outputBufferInfo
      if (outputBuffer != null) {
        outputBuffer.position(outputBufferInfo.offset);
        outputBuffer.limit(outputBufferInfo.offset + outputBufferInfo.size);
      }
     ...
    }

    boolean processedOutputBuffer;
    if (codecNeedsEosOutputExceptionWorkaround && codecReceivedEos) { // (the else-branch, which makes the same call, is elided)
      processedOutputBuffer =
          processOutputBuffer( // hand the output buffer to the subclass
              positionUs,
              elapsedRealtimeUs,
              codec,
              outputBuffer,
              outputIndex,
              outputBufferInfo.flags,
              /* sampleCount= */ 1,
              outputBufferInfo.presentationTimeUs,
              isDecodeOnlyOutputBuffer,
              isLastOutputBuffer,
              outputFormat);
    }
    // the current output buffer was rendered successfully
    if (processedOutputBuffer) {
      onProcessedOutputBuffer(outputBufferInfo.presentationTimeUs);
      boolean isEndOfStream = (outputBufferInfo.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0;
      resetOutputBuffer(); // reset the output buffer
      if (!isEndOfStream) {
        return true;
      }
      processEndOfStream();
    }
    // otherwise return false to stop draining and move on to feedInputBuffer
    return false;
  }
  // feed data into MediaCodec
  private boolean feedInputBuffer() throws ExoPlaybackException {
    ...
    if (inputIndex < 0) { // a new input buffer is needed (after resetInputBuffer)
      // dequeue an input buffer index
      inputIndex = codec.dequeueInputBufferIndex();
      // this may fail: MediaCodec's internal buffers can be full, in which case we stop feeding input;
      // the maximum buffer size was fixed by the Format passed in when the MediaCodec was configured
      if (inputIndex < 0) {
        return false;
      }
      // look up the input buffer by its index
      buffer.data = codec.getInputBuffer(inputIndex);
      // clear previous data
      buffer.clear();
    }

    // the codec is being drained, and the current input buffer is used to signal it
    if (codecDrainState == DRAIN_STATE_SIGNAL_END_OF_STREAM) {
      // We need to re-initialize the codec. Send an end of stream signal to the existing codec so
      // that it outputs any remaining buffers before we release it.
      if (codecNeedsEosPropagation) {
        // Do nothing.
      } else {
        codecReceivedEos = true;
        codec.queueInputBuffer(inputIndex, 0, 0, 0, MediaCodec.BUFFER_FLAG_END_OF_STREAM);
        resetInputBuffer();
      }
      codecDrainState = DRAIN_STATE_WAIT_END_OF_STREAM;
      return false;
    }

    // device workaround: prepend three H.264 NAL units (SPS, PPS, and a 32x32 pixel IDR slice) to the input buffer to force a format update
    if (codecNeedsAdaptationWorkaroundBuffer) {
      codecNeedsAdaptationWorkaroundBuffer = false;
      buffer.data.put(ADAPTATION_WORKAROUND_BUFFER);
      codec.queueInputBuffer(inputIndex, 0, ADAPTATION_WORKAROUND_BUFFER.length, 0, 0);
      resetInputBuffer();
      codecReceivedBuffers = true;
      return true;
    }

    // for adaptive reconfiguration, the decoder expects the reconfiguration data at the start of the buffer
    if (codecReconfigurationState == RECONFIGURATION_STATE_WRITE_PENDING) {
      for (int i = 0; i < codecInputFormat.initializationData.size(); i++) {
        byte[] data = codecInputFormat.initializationData.get(i);
        buffer.data.put(data);
      }
      codecReconfigurationState = RECONFIGURATION_STATE_QUEUE_PENDING;
    }
    int adaptiveReconfigurationBytes = buffer.data.position();

    FormatHolder formatHolder = getFormatHolder();

    @SampleStream.ReadDataResult int result;
    try {
      // read data into the input buffer
      result = readSource(formatHolder, buffer, /* readFlags= */ 0);
    } catch (InsufficientCapacityException e) {
      onCodecError(e);
      // for an oversized sample, read only the format (metadata) and skip the sample data
      readSourceOmittingSampleData(/* readFlags= */ 0);
      flushCodec();
      return true;
    }
...
    if (result == C.RESULT_FORMAT_READ) { // a format was read
      if (codecReconfigurationState == RECONFIGURATION_STATE_QUEUE_PENDING) {
        // We received two formats in a row. Clear the current buffer of any reconfiguration data
        // associated with the first format.
        buffer.clear();
        codecReconfigurationState = RECONFIGURATION_STATE_WRITE_PENDING;
      }
      onInputFormatChanged(formatHolder);
      return true;
    }

    ...
    long presentationTimeUs = buffer.timeUs; // the PTS

    ...
    largestQueuedPresentationTimeUs = max(largestQueuedPresentationTimeUs, presentationTimeUs);
    buffer.flip(); // switch the buffer to read mode
    if (buffer.hasSupplementalData()) {
      handleInputBufferSupplementalData(buffer);
    }

    onQueueInputBuffer(buffer);
    try {
      // hand the input buffer containing undecoded data to MediaCodec for decoding
      if (bufferEncrypted) {
        codec.queueSecureInputBuffer(
            inputIndex, /* offset= */ 0, buffer.cryptoInfo, presentationTimeUs, /* flags= */ 0);
      } else {
        codec.queueInputBuffer(
            inputIndex, /* offset= */ 0, buffer.data.limit(), presentationTimeUs, /* flags= */ 0);
      }
    } catch (CryptoException e) {
      throw createRendererException(
          e, inputFormat, Util.getErrorCodeForMediaDrmErrorCode(e.getErrorCode()));
    }

    resetInputBuffer(); // reset so a fresh input buffer is dequeued next time
    codecReceivedBuffers = true;
    codecReconfigurationState = RECONFIGURATION_STATE_NONE;
    decoderCounters.queuedInputBufferCount++;
    return true;
  }

processOutputBuffer is left to the subclasses, i.e. MediaCodecVideoRenderer and MediaCodecAudioRenderer. The MediaCodecAudioRenderer implementation is the simpler of the two.

MediaCodecAudioRenderer

The audio renderer; it plays audio through Android's AudioTrack.
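Before its processOutputBuffer, a quick sketch of raw AudioTrack usage helps frame what DefaultAudioSink is ultimately driving. The values are hypothetical, pcmData is assumed to hold decoded PCM bytes, and the deprecated stream-type constructor is used for brevity:

java
  // Minimal AudioTrack playback sketch: 44.1 kHz stereo 16-bit PCM.
  int bufferSize = AudioTrack.getMinBufferSize(
      44100, AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT);
  AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, 44100,
      AudioFormat.CHANNEL_OUT_STEREO, AudioFormat.ENCODING_PCM_16BIT,
      bufferSize, AudioTrack.MODE_STREAM);
  track.play();
  int written = track.write(pcmData, 0, pcmData.length); // blocking write of decoded PCM
  long playedFrames = 0xFFFFFFFFL & track.getPlaybackHeadPosition(); // frames played so far
  track.release();

With that picture in mind, here is processOutputBuffer: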

java
  @Override
  protected boolean processOutputBuffer(
      long positionUs,
      long elapsedRealtimeUs,
      @Nullable MediaCodecAdapter codec,
      @Nullable ByteBuffer buffer,
      int bufferIndex,
      int bufferFlags,
      int sampleCount,
      long bufferPresentationTimeUs,
      boolean isDecodeOnlyBuffer,
      boolean isLastBuffer,
      Format format)
      throws ExoPlaybackException {
    checkNotNull(buffer);

    if (decryptOnlyCodecFormat != null // the buffer carries codec initialization/codec-specific data rather than media; just release it
        && (bufferFlags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) != 0) {
      // Discard output buffers from the passthrough (raw) decoder containing codec specific data.
      checkNotNull(codec).releaseOutputBuffer(bufferIndex, false);
      return true;
    }

    if (isDecodeOnlyBuffer) { // data that need not be rendered is released directly
      if (codec != null) {
        codec.releaseOutputBuffer(bufferIndex, false);
      }
      decoderCounters.skippedOutputBufferCount += sampleCount;
      audioSink.handleDiscontinuity();
      return true;
    }

    boolean fullyConsumed;
    try {
      // render the data: hand it to DefaultAudioSink for playback
      fullyConsumed = audioSink.handleBuffer(buffer, bufferPresentationTimeUs, sampleCount);
    } catch (InitializationException e) {
      throw createRendererException(
          e, inputFormat, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_INIT_FAILED);
    } catch (WriteException e) {
      throw createRendererException(
          e, format, e.isRecoverable, PlaybackException.ERROR_CODE_AUDIO_TRACK_WRITE_FAILED);
    }

    if (fullyConsumed) { // fully rendered; release the buffer
      if (codec != null) {
        codec.releaseOutputBuffer(bufferIndex, false);
      }
      decoderCounters.renderedOutputBufferCount += sampleCount;
      return true;
    }

    return false;
  }
  
  //DefaultAudioSink
  @Override
  @SuppressWarnings("ReferenceEquality")
  public boolean handleBuffer(
      ByteBuffer buffer, long presentationTimeUs, int encodedAccessUnitCount)
      throws InitializationException, WriteException {
    ...
    // initialize and create the AudioTrack
    if (!isAudioTrackInitialized()) {
      try {
        if (!initializeAudioTrack()) {
          // Not yet ready for initialization of a new AudioTrack.
          return false;
        }
      } catch (InitializationException e) {
        if (e.isRecoverable) {
          throw e; // Do not delay the exception if it can be recovered at higher level.
        }
        initializationExceptionPendingExceptionHolder.throwExceptionIfDeadlineIsReached(e);
        return false;
      }
    }
    initializationExceptionPendingExceptionHolder.clear();

    if (startMediaTimeUsNeedsInit) { // first run
      startMediaTimeUs = max(0, presentationTimeUs);
      startMediaTimeUsNeedsSync = false;
      startMediaTimeUsNeedsInit = false;

      if (useAudioTrackPlaybackParams()) {
        setAudioTrackPlaybackParametersV23();
      }
      applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
      //audioTrack.play()
      if (playing) {
        play();
      }
    }
...

      // sanity-check presentationTimeUs
      long expectedPresentationTimeUs =
          startMediaTimeUs // playback start time
              + configuration.inputFramesToDurationUs( // frame count over the sample rate = nominal duration up to this frame
                  getSubmittedFrames() - trimmingAudioProcessor.getTrimmedFrameCount());
      if (!startMediaTimeUsNeedsSync // deviates from the expected time by more than 200 ms
          && Math.abs(expectedPresentationTimeUs - presentationTimeUs) > 200000) {
        if (listener != null) {
          listener.onAudioSinkError(
              new AudioSink.UnexpectedDiscontinuityException(
                  presentationTimeUs, expectedPresentationTimeUs));
        }
        startMediaTimeUsNeedsSync = true; // mark the start time as needing resync
      }
      if (startMediaTimeUsNeedsSync) { // resync startMediaTimeUs
        if (!drainToEndOfStream()) {
          // Don't update timing until pending AudioProcessor buffers are completely drained.
          return false;
        }
        // adjust startMediaTimeUs:
        // the time difference
        long adjustmentUs = presentationTimeUs - expectedPresentationTimeUs;
        // shift startMediaTimeUs by it
        startMediaTimeUs += adjustmentUs;
        startMediaTimeUsNeedsSync = false;
        // Re-apply playback parameters because the startMediaTimeUs changed.
        applyAudioProcessorPlaybackParametersAndSkipSilence(presentationTimeUs);
        if (listener != null && adjustmentUs != 0) {
          listener.onPositionDiscontinuity();
        }
      }
      // accumulate the total submitted bytes/frames
      if (configuration.outputMode == OUTPUT_MODE_PCM) {
        submittedPcmBytes += buffer.remaining();
      } else {
        submittedEncodedFrames += (long) framesPerEncodedSample * encodedAccessUnitCount;
      }

      inputBuffer = buffer;
      inputBufferAccessUnitCount = encodedAccessUnitCount;
    }
    // ultimately calls audioTrack.write to output the audio
    processBuffers(presentationTimeUs);

    if (!inputBuffer.hasRemaining()) {
      inputBuffer = null;
      inputBufferAccessUnitCount = 0;
      return true;
    }

    if (audioTrackPositionTracker.isStalled(getWrittenFrames())) {
      Log.w(TAG, "Resetting stalled audio track");
      flush();
      return true;
    }

    return false;
  }

Notice that MediaCodecAudioRenderer outputs audio using the incoming bufferPresentationTimeUs directly as the PTS, never adjusting it along the way; only the start time, startMediaTimeUs, gets adjusted. The implementation is therefore simple and contains almost no synchronization code. From this we can infer that ExoPlayer probably uses the audio PTS as the reference clock: when playing video, the video is synchronized to the audio clock. Let's verify that guess by looking at the MediaCodecVideoRenderer implementation.

MediaCodecVideoRenderer

Video data is rendered onto the given Surface once MediaCodec.releaseOutputBuffer is called; that process happens mostly inside MediaCodecVideoRenderer.

java
@Override
  protected boolean processOutputBuffer(
      long positionUs, // playback position of the reference clock; corresponds to the audio PTS
      long elapsedRealtimeUs, // timestamp taken before this render loop started, used to measure execution time
      @Nullable MediaCodecAdapter codec,
      @Nullable ByteBuffer buffer,
      int bufferIndex,
      int bufferFlags,
      int sampleCount,
      long bufferPresentationTimeUs, // PTS of the video stream
      boolean isDecodeOnlyBuffer,
      boolean isLastBuffer,
      Format format)
      throws ExoPlaybackException {
    checkNotNull(codec); // video always requires a codec

    if (initialPositionUs == C.TIME_UNSET) {
      initialPositionUs = positionUs; // initialize the position on the first call
    }
    // track the previous bufferPresentationTimeUs
    if (bufferPresentationTimeUs != lastBufferPresentationTimeUs) {
      if (!videoFrameProcessorManager.isEnabled()) {
        frameReleaseHelper.onNextFrame(bufferPresentationTimeUs);
      } // else, update the frameReleaseHelper when releasing the processed frames.
      this.lastBufferPresentationTimeUs = bufferPresentationTimeUs;
    }
    // the PTS offset at which the stream starts
    long outputStreamOffsetUs = getOutputStreamOffsetUs();
    // current PTS minus start offset = position within the stream
    long presentationTimeUs = bufferPresentationTimeUs - outputStreamOffsetUs;

    if (isDecodeOnlyBuffer && !isLastBuffer) {
      skipOutputBuffer(codec, bufferIndex, presentationTimeUs);
      return true;
    }

    // Note: Use of double rather than float is intentional for accuracy in the calculations below.
    boolean isStarted = getState() == STATE_STARTED;
    // current system time
    long elapsedRealtimeNowUs = SystemClock.elapsedRealtime() * 1000;
    long earlyUs = // earliness = current stream PTS - reference clock - execution time so far
        calculateEarlyTimeUs(
            positionUs,
            elapsedRealtimeUs,
            elapsedRealtimeNowUs,
            bufferPresentationTimeUs,
            isStarted);

    if (displaySurface == placeholderSurface) {
      // Skip frames in sync with playback, so we'll be at the right frame if the mode changes.
      if (isBufferLate(earlyUs)) {
        skipOutputBuffer(codec, bufferIndex, presentationTimeUs);
        updateVideoFrameProcessingOffsetCounters(earlyUs);
        return true;
      }
      return false;
    }
    // the frame is already more than 30 ms late (earlyUs < -30000) and more than 100 ms have passed since the last render; the picture is frozen, so force-render this frame
    boolean forceRenderOutputBuffer = shouldForceRender(positionUs, earlyUs);
    if (forceRenderOutputBuffer) { // forced-render path
      boolean notifyFrameMetaDataListener;
      if (videoFrameProcessorManager.isEnabled()) {
        notifyFrameMetaDataListener = false;
        if (!videoFrameProcessorManager.maybeRegisterFrame(
            format, presentationTimeUs, isLastBuffer)) {
          return false;
        }
      } else {
        notifyFrameMetaDataListener = true;
      }
      renderOutputBufferNow( // render immediately
          codec, format, bufferIndex, presentationTimeUs, notifyFrameMetaDataListener);
      updateVideoFrameProcessingOffsetCounters(earlyUs);
      return true;
    }

    if (!isStarted || positionUs == initialPositionUs) {
      return false;
    }

    // compute the display timestamp passed to the codec via releaseOutputBuffer
    long systemTimeNs = System.nanoTime();
    // now + earliness
    long unadjustedFrameReleaseTimeNs = systemTimeNs + (earlyUs * 1000);

    // refine the display timestamp further; the code appears below
    long adjustedReleaseTimeNs = frameReleaseHelper.adjustReleaseTime(unadjustedFrameReleaseTimeNs);
    if (!videoFrameProcessorManager.isEnabled()) { // recompute earlyUs from the refined release time
      earlyUs = (adjustedReleaseTimeNs - systemTimeNs) / 1000;
    }
    // frame-dropping logic
    boolean treatDroppedBuffersAsSkipped = joiningDeadlineMs != C.TIME_UNSET;
    if (shouldDropBuffersToKeyframe(earlyUs, elapsedRealtimeUs, isLastBuffer)
        && maybeDropBuffersToKeyframe(positionUs, treatDroppedBuffersAsSkipped)) {
      return false;
    } else if (shouldDropOutputBuffer(earlyUs, elapsedRealtimeUs, isLastBuffer)) {
      if (treatDroppedBuffersAsSkipped) {
        skipOutputBuffer(codec, bufferIndex, presentationTimeUs);
      } else {
        dropOutputBuffer(codec, bufferIndex, presentationTimeUs);
      }
      updateVideoFrameProcessingOffsetCounters(earlyUs);
      return true;
    }
   ...

    if (Util.SDK_INT >= 21) { // on API 21+, pass the display time through and let the codec decide when to display
      if (earlyUs < 50000) { // only release frames within 50 ms of their display time; earlier frames wait for a later pass
        if (adjustedReleaseTimeNs == lastFrameReleaseTimeNs) {
          // same release time as the previous frame: we are rendering faster than the display refreshes, so skip this frame quickly to keep up the rendering rate
          skipOutputBuffer(codec, bufferIndex, presentationTimeUs);
        } else {
          // notify the frame-metadata listener
          notifyFrameMetadataListener(presentationTimeUs, adjustedReleaseTimeNs, format);
          // releaseOutputBuffer with adjustedReleaseTimeNs as the display time
          renderOutputBufferV21(codec, bufferIndex, presentationTimeUs, adjustedReleaseTimeNs);
        }
        updateVideoFrameProcessingOffsetCounters(earlyUs);
        lastFrameReleaseTimeNs = adjustedReleaseTimeNs;
        return true;
      }
    } else {
      // below API 21, releaseOutputBuffer takes no display time, so we control the timing ourselves
      if (earlyUs < 30000) { // only render frames within 30 ms of display time (30 ms chosen empirically)
        if (earlyUs > 11000) { // between 11 ms and 30 ms early: still a bit too soon, so block and wait
          // Note: The 11ms threshold was chosen fairly arbitrarily.
          try {
            // sleeps to within ~10 ms of display time, at least 1 ms
            Thread.sleep((earlyUs - 10000) / 1000);
          } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
          }
        }
        // notify the frame-metadata listener
        notifyFrameMetadataListener(presentationTimeUs, adjustedReleaseTimeNs, format);
        // within 11 ms: display right away
        renderOutputBuffer(codec, bufferIndex, presentationTimeUs); // renders immediately
        updateVideoFrameProcessingOffsetCounters(earlyUs);
        return true;
      }
    }

    // returning false: either playback hasn't started or it isn't time to render this frame yet
    return false;
  }
  // compute the earliness
  private long calculateEarlyTimeUs(
      long positionUs,
      long elapsedRealtimeUs,
      long elapsedRealtimeNowUs,
      long bufferPresentationTimeUs,
      boolean isStarted) {
    // Note: Use of double rather than float is intentional for accuracy in the calculations below.
    double playbackSpeed = getPlaybackSpeed();
    // how far ahead of the current playback position this frame is, i.e. how early it would be rendered; a negative value means the picture is already late: the frame wasn't delivered at the moment it was needed
    long earlyUs = (long) ((bufferPresentationTimeUs - positionUs) / playbackSpeed);
    if (isStarted) {
      // subtract the time spent executing up to this point
      earlyUs -= elapsedRealtimeNowUs - elapsedRealtimeUs;
    }

    return earlyUs;
  }
  
  @RequiresApi(21)
  protected void renderOutputBufferV21(
      MediaCodecAdapter codec, int index, long presentationTimeUs, long releaseTimeNs) {
    TraceUtil.beginSection("releaseOutputBuffer");
    codec.releaseOutputBuffer(index, releaseTimeNs); // pass the display time; the platform decides when to show the frame
    TraceUtil.endSection();
    decoderCounters.renderedOutputBufferCount++;
    consecutiveDroppedFrameCount = 0;
    if (!videoFrameProcessorManager.isEnabled()) {
      lastRenderRealtimeUs = SystemClock.elapsedRealtime() * 1000;
      maybeNotifyVideoSizeChanged(decodedVideoSize);
      maybeNotifyRenderedFirstFrame();
    }
  }
  protected void renderOutputBuffer(MediaCodecAdapter codec, int index, long presentationTimeUs) {
    TraceUtil.beginSection("releaseOutputBuffer");
    codec.releaseOutputBuffer(index, true); // below API 21 this displays immediately; true renders to the codec's Surface
    TraceUtil.endSection();
    decoderCounters.renderedOutputBufferCount++;
    consecutiveDroppedFrameCount = 0;
    if (!videoFrameProcessorManager.isEnabled()) {
      lastRenderRealtimeUs = SystemClock.elapsedRealtime() * 1000;
      maybeNotifyVideoSizeChanged(decodedVideoSize);
      maybeNotifyRenderedFirstFrame();
    }
  }
// now VideoFrameReleaseHelper's refinement of the display timestamp
  public long adjustReleaseTime(long releaseTimeNs) {
    // Until we know better, the adjustment will be a no-op.
    long adjustedReleaseTimeNs = releaseTimeNs;
    // runs only when synced, i.e. 15 consecutive frames whose intervals deviate by less than 1 ms
    if (lastAdjustedFrameIndex != C.INDEX_UNSET && frameRateEstimator.isSynced()) {
      // average frame duration = total duration of those frames / frame count
      long frameDurationNs = frameRateEstimator.getFrameDurationNs();
      long candidateAdjustedReleaseTimeNs =
          lastAdjustedReleaseTimeNs // predicted release time = last adjusted release time + (frames since then * frame duration) / playback speed
              + (long) ((frameDurationNs * (frameIndex - lastAdjustedFrameIndex)) / playbackSpeed);
      // if the computed and predicted release times are within 20 ms of each other, use the prediction;
      // the 20 ms allowance reflects Android's VSYNC pipeline: a released frame isn't shown immediately but passes through triple buffering and is displayed on a VSYNC signal, so the requested release time is not the actual display time
      if (adjustmentAllowed(releaseTimeNs, candidateAdjustedReleaseTimeNs)) {
        adjustedReleaseTimeNs = candidateAdjustedReleaseTimeNs;
      } else {
        resetAdjustment();
      }
    }
    pendingLastAdjustedFrameIndex = frameIndex;
    pendingLastAdjustedReleaseTimeNs = adjustedReleaseTimeNs;
    // below, snap the release timestamp to the VSYNC clock so the frame reaches the screen as soon as possible
    if (vsyncSampler == null || vsyncDurationNs == C.TIME_UNSET) {
      return adjustedReleaseTimeNs;
    }
    // the most recently sampled VSYNC timestamp
    long sampledVsyncTimeNs = vsyncSampler.sampledVsyncTimeNs;
    if (sampledVsyncTimeNs == C.TIME_UNSET) {
      return adjustedReleaseTimeNs;
    }
    // find the VSYNC timestamp closest to the release time
    long snappedTimeNs = closestVsync(adjustedReleaseTimeNs, sampledVsyncTimeNs, vsyncDurationNs);
    // subtract vsyncOffsetNs so the release lands ahead of the target VSYNC; vsyncOffsetNs is computed as:
    // 1. take the display refresh rate, e.g. 60 Hz (the screen refreshes 60 times per second)
    // 2. frame interval = 1/60 s, i.e. about 16.6 ms
    // 3. vsyncOffsetNs = 16.6 ms * 0.8 = about 13.3 ms
    return snappedTimeNs - vsyncOffsetNs;
  }

So MediaCodecVideoRenderer synchronizes the current stream's PTS against the reference position positionUs. We cannot yet see the direct link to MediaCodecAudioRenderer's audio PTS, but clearly the video PTS is synchronized against an external reference time. ExoPlayer spends a great deal of code on this: it accounts for the program's own execution time and computes at nanosecond granularity to minimize error, and in extreme cases it simply drops frames to preserve sync. (That is why, when a file is heavy to decode, the video stutters while the audio keeps playing smoothly. Think about why it is done this way, and whether the reverse would work.)
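Condensed, the per-frame decision in processOutputBuffer reduces to comparing earlyUs against a few empirically chosen thresholds. The following is a summary sketch of the API 21+ path, not the actual code: maybeDropBuffersToKeyframe appears in the source above, but forceRender, dropBuffer, releaseWithTimestamp, and the time variables are placeholders.

java
  // Summary sketch of the frame decision (API 21+ path); thresholds are the ones seen above.
  long earlyUs = (long) ((bufferPtsUs - positionUs) / playbackSpeed)
      - (nowUs - renderLoopStartUs); // subtract the render loop's own execution time

  if (earlyUs < -30_000 && timeSinceLastRenderMs > 100) {
    forceRender();                   // picture frozen: render this frame anyway so it keeps moving
  } else if (earlyUs < -500_000) {
    maybeDropBuffersToKeyframe();    // hopelessly late: drop everything up to the next keyframe
  } else if (earlyUs < -30_000) {
    dropBuffer();                    // late: drop this frame to catch back up with the audio clock
  } else if (earlyUs < 50_000) {
    releaseWithTimestamp(adjustedReleaseTimeNs); // on time: hand to the codec with a display time
  }
  // else: more than 50 ms early; keep the buffer and try again on a later render() call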

Computing the Reference Timestamp

When there is an audio track, the positionUs that MediaCodecVideoRenderer uses as its reference comes from MediaCodecAudioRenderer, which in turn calls into DefaultAudioSink and ultimately obtains the timestamp from AudioTrack. Let's walk through how it is obtained.

java
  private void updateCurrentPosition() {
    long newCurrentPositionUs = audioSink.getCurrentPositionUs(isEnded());
    if (newCurrentPositionUs != AudioSink.CURRENT_POSITION_NOT_SET) {
      currentPositionUs =
          allowPositionDiscontinuity
              ? newCurrentPositionUs
              : max(currentPositionUs, newCurrentPositionUs);
      allowPositionDiscontinuity = false;
    }
  }
  //DefaultAudioSink
  @Override
  public long getCurrentPositionUs(boolean sourceEnded) {
    if (!isAudioTrackInitialized() || startMediaTimeUsNeedsInit) {
      return CURRENT_POSITION_NOT_SET;
    }
    // the position mainly comes from here
    long positionUs = audioTrackPositionTracker.getCurrentPositionUs(sourceEnded);
    // clamp to the duration of the frames actually written
    positionUs = min(positionUs, configuration.framesToDurationUs(getWrittenFrames()));
    return applySkipping(applyMediaPositionParameters(positionUs));
  }
  public long getCurrentPositionUs(boolean sourceEnded) {
    if (checkNotNull(this.audioTrack).getPlayState() == PLAYSTATE_PLAYING) {
      // while playing, sample from the AudioTrack the values used by the logic below, including smoothedPlayheadOffsetUs, which smooths getPlaybackHeadPositionUs
      maybeSampleSyncParams();
    }

    long systemTimeUs = System.nanoTime() / 1000;
    long positionUs;
    AudioTimestampPoller audioTimestampPoller = checkNotNull(this.audioTimestampPoller);
    boolean useGetTimestampMode = audioTimestampPoller.hasAdvancingTimestamp();
    if (useGetTimestampMode) { // prefer AudioTrack.getTimestamp when it is available
      // Calculate the speed-adjusted position using the timestamp (which may be in the future).
      long timestampPositionFrames = audioTimestampPoller.getTimestampPositionFrames(); // frame count at the timestamp
      long timestampPositionUs = framesToDurationUs(timestampPositionFrames); // frames converted to a duration
      long elapsedSinceTimestampUs = systemTimeUs - audioTimestampPoller.getTimestampSystemTimeUs();
      elapsedSinceTimestampUs = // time elapsed since the timestamp was taken, scaled for playback speed
          Util.getMediaDurationForPlayoutDuration(elapsedSinceTimestampUs, audioTrackPlaybackSpeed);
      positionUs = timestampPositionUs + elapsedSinceTimestampUs; // timestamp position + elapsed time = current position
    } else { // otherwise fall back to getPlaybackHeadPositionUs
      if (playheadOffsetCount == 0) {
        // playback just started: not enough samples to smooth, so use getPlaybackHeadPositionUs directly
        positionUs = getPlaybackHeadPositionUs();
      } else {
        // getPlaybackHeadPositionUs() only has a granularity of ~20 ms, so we base the position off
        // the system clock (and a smoothed offset between it and the playhead position) so as to
        // prevent jitter in the reported positions.
        positionUs =
            Util.getMediaDurationForPlayoutDuration(
                systemTimeUs + smoothedPlayheadOffsetUs, audioTrackPlaybackSpeed);
      }
      if (!sourceEnded) {
        // finally subtract the platform audio latency
        positionUs = max(0, positionUs - latencyUs);
      }
    }

    if (lastSampleUsedGetTimestampMode != useGetTimestampMode) {
      // the position mode changed between calls; save the previous mode's values
      previousModeSystemTimeUs = lastSystemTimeUs;
      previousModePositionUs = lastPositionUs;
    }
    long elapsedSincePreviousModeUs = systemTimeUs - previousModeSystemTimeUs;
    // if a mode switch happened within the last second, ramp from the previous mode's projected position to the new one to avoid a jump
    if (elapsedSincePreviousModeUs < MODE_SWITCH_SMOOTHING_DURATION_US) {
      long previousModeProjectedPositionUs =
          previousModePositionUs
              + Util.getMediaDurationForPlayoutDuration(
                  elapsedSincePreviousModeUs, audioTrackPlaybackSpeed);
      // linear ramp with 1/1000 granularity across the 1 s window
      long rampPoint = (elapsedSincePreviousModeUs * 1000) / MODE_SWITCH_SMOOTHING_DURATION_US;
      positionUs *= rampPoint;
      positionUs += (1000 - rampPoint) * previousModeProjectedPositionUs;
      positionUs /= 1000;
    }
    // notify the listener the first time the position advances
    if (!notifiedPositionIncreasing && positionUs > lastPositionUs) {
      notifiedPositionIncreasing = true;
      long mediaDurationSinceLastPositionUs = Util.usToMs(positionUs - lastPositionUs);
      long playoutDurationSinceLastPositionUs =
          Util.getPlayoutDurationForMediaDuration(
              mediaDurationSinceLastPositionUs, audioTrackPlaybackSpeed);
      long playoutStartSystemTimeMs = // system time at which playout started
          System.currentTimeMillis() - Util.usToMs(playoutDurationSinceLastPositionUs);
      listener.onPositionAdvancing(playoutStartSystemTimeMs);
    }

    lastSystemTimeUs = systemTimeUs;
    lastPositionUs = positionUs;
    lastSampleUsedGetTimestampMode = useGetTimestampMode;

    return positionUs;
  }
  
  private void maybeSampleSyncParams() {
    // current time
    long systemTimeUs = System.nanoTime() / 1000;
    // sample at most once every 30 ms
    if (systemTimeUs - lastPlayheadSampleTimeUs >= MIN_PLAYHEAD_OFFSET_SAMPLE_INTERVAL_US) {
      // current playback position via AudioTrack.getPlaybackHeadPosition
      long playbackPositionUs = getPlaybackHeadPositionUs();
      if (playbackPositionUs == 0) {
        // audio may not have started playing yet
        return;
      }
      // keep the offsets between playbackPositionUs and the system clock for up to the last 10 samples,
      // averaging them to smooth playbackPositionUs
      playheadOffsets[nextPlayheadOffsetIndex] = // store this sample's offset
          Util.getPlayoutDurationForMediaDuration(playbackPositionUs, audioTrackPlaybackSpeed)
              - systemTimeUs;
      // circular buffer of 10 entries
      nextPlayheadOffsetIndex = (nextPlayheadOffsetIndex + 1) % MAX_PLAYHEAD_OFFSET_COUNT;
      if (playheadOffsetCount < MAX_PLAYHEAD_OFFSET_COUNT) {
        playheadOffsetCount++;
      }
      lastPlayheadSampleTimeUs = systemTimeUs;
      smoothedPlayheadOffsetUs = 0;
      // average the stored offsets to get a smoothed offset; later, current time + this offset yields the playback head position
      for (int i = 0; i < playheadOffsetCount; i++) {
        smoothedPlayheadOffsetUs += playheadOffsets[i] / playheadOffsetCount;
      }
    }

    if (needsPassthroughWorkarounds) {
      // on API 21/22 AC-3 passthrough tracks, the timestamp and latency read below are bogus values, so skip them
      return;
    }
    // poll the timestamp via audioTrack.getTimestamp
    maybePollAndCheckTimestamp(systemTimeUs);
    // query the platform latency via audioTrack.getLatency
    maybeUpdateLatency(systemTimeUs);
  }
  private long getPlaybackHeadPositionUs() {
    return framesToDurationUs(getPlaybackHeadPosition());
  }
  private long getPlaybackHeadPosition() {
    // current time
    long currentTimeMs = SystemClock.elapsedRealtime();
    if (stopTimestampUs != C.TIME_UNSET) { // playback has been stopped
      // Simulate the playback head position up to the total number of frames submitted.
      // time elapsed since the stop
      long elapsedTimeSinceStopUs = (currentTimeMs * 1000) - stopTimestampUs;
      // correct the duration for playback speed
      long mediaTimeSinceStopUs =
          Util.getMediaDurationForPlayoutDuration(elapsedTimeSinceStopUs, audioTrackPlaybackSpeed);
      // duration converted to frames
      long framesSinceStop = durationUsToFrames(mediaTimeSinceStopUs);
      // playhead at stop + frames since stop = playhead now, capped at the total frames written by the stop
      return min(endPlaybackHeadPosition, stopPlaybackHeadPosition + framesSinceStop);
    }
    // normal path below; refresh at most once every 5 ms
    if (currentTimeMs - lastRawPlaybackHeadPositionSampleTimeMs
        >= RAW_PLAYBACK_HEAD_POSITION_UPDATE_INTERVAL_MS) {
      updateRawPlaybackHeadPosition(currentTimeMs);
      lastRawPlaybackHeadPositionSampleTimeMs = currentTimeMs;
    }
    return rawPlaybackHeadPosition + (rawPlaybackHeadWrapCount << 32);
  }
  private void updateRawPlaybackHeadPosition(long currentTimeMs) {
    AudioTrack audioTrack = checkNotNull(this.audioTrack);
    int state = audioTrack.getPlayState();
    if (state == PLAYSTATE_STOPPED) {
      // The audio track hasn't been started. Keep initial zero timestamp.
      return;
    }
    // audioTrack.getPlaybackHeadPosition returns an unsigned 32-bit value from the platform; widen it into a signed long
    long rawPlaybackHeadPosition = 0xFFFFFFFFL & audioTrack.getPlaybackHeadPosition();
    if (needsPassthroughWorkarounds) {
      // compatibility workaround: on API 21/22 passthrough tracks, the position may read 0 while paused
      if (state == PLAYSTATE_PAUSED && rawPlaybackHeadPosition == 0) {
        // remember the position we had when it dropped to 0
        passthroughWorkaroundPauseOffset = this.rawPlaybackHeadPosition;
      }
      // and restore it here
      rawPlaybackHeadPosition += passthroughWorkaroundPauseOffset;
    }

    if (Util.SDK_INT <= 29) {
      if (rawPlaybackHeadPosition == 0
          && this.rawPlaybackHeadPosition > 0
          && state == PLAYSTATE_PLAYING) {
        // another workaround: on API <= 29 with Bluetooth playback, if the connection fails the platform
        // side has stopped but the Java layer still reports PLAYING, and the position reads 0
        if (forceResetWorkaroundTimeMs == C.TIME_UNSET) {
          // set this marker to record the error state; if it persists beyond 200 ms, reinitialization is attempted
          forceResetWorkaroundTimeMs = currentTimeMs;
        }
        return;
      } else {
        forceResetWorkaroundTimeMs = C.TIME_UNSET;
      }
    }

    if (this.rawPlaybackHeadPosition > rawPlaybackHeadPosition) {
      // The value must have wrapped around.
      rawPlaybackHeadWrapCount++;
    }
    this.rawPlaybackHeadPosition = rawPlaybackHeadPosition;
  }
  // poll the timestamp via audioTrack.getTimestamp
  private void maybePollAndCheckTimestamp(long systemTimeUs) {
    // getTimestamp must not be called too often; AudioTimestampPoller throttles polling and, once stable, limits callers to one poll every 10 s
    AudioTimestampPoller audioTimestampPoller = checkNotNull(this.audioTimestampPoller);
    // whether a new timestamp was polled; requires API 19+ (where the method exists) and the poll interval to have elapsed
    if (!audioTimestampPoller.maybePollTimestamp(systemTimeUs)) {
      return;
    }

    // validate the polled timestamp
    long audioTimestampSystemTimeUs = audioTimestampPoller.getTimestampSystemTimeUs();
    long audioTimestampPositionFrames = audioTimestampPoller.getTimestampPositionFrames();
    long playbackPositionUs = getPlaybackHeadPositionUs();
    // reject if it deviates from the system time by more than 5 s
    if (Math.abs(audioTimestampSystemTimeUs - systemTimeUs) > MAX_AUDIO_TIMESTAMP_OFFSET_US) {
      listener.onSystemTimeUsMismatch(
          audioTimestampPositionFrames,
          audioTimestampSystemTimeUs,
          systemTimeUs,
          playbackPositionUs);
      audioTimestampPoller.rejectTimestamp();
      // reject if it deviates from getPlaybackHeadPositionUs by more than 5 s
    } else if (Math.abs(framesToDurationUs(audioTimestampPositionFrames) - playbackPositionUs)
        > MAX_AUDIO_TIMESTAMP_OFFSET_US) {
      listener.onPositionFramesMismatch(
          audioTimestampPositionFrames,
          audioTimestampSystemTimeUs,
          systemTimeUs,
          playbackPositionUs);
      audioTimestampPoller.rejectTimestamp();
    } else {
      audioTimestampPoller.acceptTimestamp();
    }
  }
  // query the platform latency via audioTrack.getLatency
  private void maybeUpdateLatency(long systemTimeUs) {
    if (isOutputPcm // linear PCM encoding
        && getLatencyMethod != null // AudioTrack has a getLatency method
        && systemTimeUs - lastLatencySampleTimeUs >= MIN_LATENCY_SAMPLE_INTERVAL_US) { // at most every 50 ms
      try {
        // platform latency minus bufferSizeUs, excluding the buffer's own delay (leaving mixer and audio-driver latency)
        latencyUs =
            castNonNull((Integer) getLatencyMethod.invoke(checkNotNull(audioTrack))) * 1000L
                - bufferSizeUs;
        // Check that the latency is non-negative.
        latencyUs = max(latencyUs, 0);
        // Check that the latency isn't too large.
        if (latencyUs > MAX_LATENCY_US) {
          listener.onInvalidLatency(latencyUs);
          latencyUs = 0;
        }
      } catch (Exception e) {
        // The method existed, but doesn't work. Don't try again.
        getLatencyMethod = null;
      }
      lastLatencySampleTimeUs = systemTimeUs;
    }
  }

ExoPlayer uses two ways to obtain the audio track's current position timestamp. On API 19 and above it prefers AudioTrack.getTimestamp; otherwise it falls back to AudioTrack.getPlaybackHeadPosition. Because getPlaybackHeadPosition has coarse precision, a smoothing algorithm additionally computes an average offset to refine it.
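That smoothing is just a rolling average over the last ten offsets between the playhead position and the system clock. Condensed, what AudioTrackPositionTracker does looks like this (a simplified sketch; field names follow the source, the helper methods are illustrative):

java
  // Rolling-average smoothing of AudioTrack.getPlaybackHeadPosition (simplified sketch).
  private final long[] playheadOffsets = new long[10]; // MAX_PLAYHEAD_OFFSET_COUNT
  private int nextPlayheadOffsetIndex;
  private int playheadOffsetCount;
  private long smoothedPlayheadOffsetUs;

  void samplePlayheadOffset(long systemTimeUs, long playheadPositionUs) {
    // store the offset between the playhead position and the system clock
    playheadOffsets[nextPlayheadOffsetIndex] = playheadPositionUs - systemTimeUs;
    nextPlayheadOffsetIndex = (nextPlayheadOffsetIndex + 1) % playheadOffsets.length;
    if (playheadOffsetCount < playheadOffsets.length) {
      playheadOffsetCount++;
    }
    // the smoothed offset is the mean of the stored offsets
    smoothedPlayheadOffsetUs = 0;
    for (int i = 0; i < playheadOffsetCount; i++) {
      smoothedPlayheadOffsetUs += playheadOffsets[i] / playheadOffsetCount;
    }
  }

  long smoothedPositionUs(long systemTimeUs) {
    // a jitter-free position: the steadily advancing system clock plus the smoothed offset
    return systemTimeUs + smoothedPlayheadOffsetUs;
  }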

Summary

Renderer is an important component, yet compared with the MediaSource articles this walkthrough is relatively brief. Partly that is because Renderer's overall structure is simpler than MediaSource's: it has fewer layers and the code is more concentrated. That does not make it less important; this code rewards careful study, for these short passages embody countless rounds of tuning by the developers, along with clever fixes for problems met in production, some of which look like reluctant but indispensable compromises to me. Another reason is that the Renderer delegates the underlying decoding to Android system components, and tracing those to the source would be a whole other series. And one last reason: I simply cannot keep writing, because by this point the site's online editor lags for ages on every keystroke, haha.


Copyright ©

This is an original article by the author 山雨楼.

Please credit the source when reposting.

Original writing isn't easy; if you found this useful, please bookmark, share, and like.
