[WebRTC] A brief analysis of the classes in the video encoding pipeline: VideoEncoder

Contents

1. Video encoder (VideoEncoder)

VideoEncoder is the upper-layer controller that carries out the actual encoding work; concrete encoder classes such as the VP8, VP9, H264 and AV1 implementations inherit from it. Its main responsibilities include:

(1) Reporting encoder information (implementation name, native handle support, software or hardware encoding, SVC)

(2) Initializing the encoder, registering the encode-complete callback object, releasing the encoder, and performing the encoding itself

(3) Quality scaling driven by QP thresholds

(4) Rate control parameters (target bitrate, actual bitrate, frame rate)

(5) Bitrate limits per resolution

(6) Loss notification (whether the last received frame was decodable, its timestamp, etc.)

(7) Forward error correction control (FecControllerOverride)

(8) Encoder status callbacks (packet loss rate changes, RTT changes, loss notifications, etc.)

Among all of these, the most important pieces are Encode() and RegisterEncodeCompleteCallback(). Encode() performs the actual encoding of one frame, while RegisterEncodeCompleteCallback() registers the object that is invoked once encoding has finished and passes the encoded data on to the other modules (mainly so that the bitstream can be sent to the receiving side). A usage sketch is given right after the class declaration below.

cpp
class RTC_EXPORT VideoEncoder {
 public:
  // QP thresholds.
  struct QpThresholds {
    QpThresholds(int l, int h) : low(l), high(h) {}
    QpThresholds() : low(-1), high(-1) {}
    int low;
    int high;
  };

  // Quality scaling is enabled if thresholds are provided.
  struct RTC_EXPORT ScalingSettings {
   private:
    // Private magic type for kOff, implicitly convertible to
    // ScalingSettings.
    struct KOff {};

   public:
    // TODO(bugs.webrtc.org/9078): Since std::optional should be trivially copy
    // constructible, this magic value can likely be replaced by a constexpr
    // ScalingSettings value.
    static constexpr KOff kOff = {};

    ScalingSettings(int low, int high);
    ScalingSettings(int low, int high, int min_pixels);
    ScalingSettings(const ScalingSettings&);
    ScalingSettings(KOff);  // NOLINT(runtime/explicit)
    ~ScalingSettings();

    std::optional<QpThresholds> thresholds;

    // We will never ask for a resolution lower than this.
    // TODO(kthelgason): Lower this limit when better testing
    // on MediaCodec and fallback implementations are in place.
    // See https://bugs.chromium.org/p/webrtc/issues/detail?id=7206
    int min_pixels_per_frame = kDefaultMinPixelsPerFrame;

   private:
    // Private constructor; to get an object without thresholds, use
    // the magic constant ScalingSettings::kOff.
    ScalingSettings();
  };

  // Bitrate limits for resolution.
  struct ResolutionBitrateLimits {
    ResolutionBitrateLimits(int frame_size_pixels,
                            int min_start_bitrate_bps,
                            int min_bitrate_bps,
                            int max_bitrate_bps)
        : frame_size_pixels(frame_size_pixels),
          min_start_bitrate_bps(min_start_bitrate_bps),
          min_bitrate_bps(min_bitrate_bps),
          max_bitrate_bps(max_bitrate_bps) {}
    // Size of video frame, in pixels, the bitrate thresholds are intended for.
    int frame_size_pixels = 0;
    // Recommended minimum bitrate to start encoding.
    int min_start_bitrate_bps = 0;
    // Recommended minimum bitrate.
    int min_bitrate_bps = 0;
    // Recommended maximum bitrate.
    int max_bitrate_bps = 0;

    bool operator==(const ResolutionBitrateLimits& rhs) const;
    bool operator!=(const ResolutionBitrateLimits& rhs) const {
      return !(*this == rhs);
    }
  };

  // Struct containing metadata about the encoder implementing this interface.
  struct RTC_EXPORT EncoderInfo {
    static constexpr uint8_t kMaxFramerateFraction =
        std::numeric_limits<uint8_t>::max();

    EncoderInfo();
    EncoderInfo(const EncoderInfo&);

    ~EncoderInfo();

    std::string ToString() const;
    bool operator==(const EncoderInfo& rhs) const;
    bool operator!=(const EncoderInfo& rhs) const { return !(*this == rhs); }

    // Any encoder implementation wishing to use the WebRTC provided
    // quality scaler must populate this field.
    ScalingSettings scaling_settings;

    // The width and height of the incoming video frames should be divisible
    // by `requested_resolution_alignment`. If they are not, the encoder may
    // drop the incoming frame.
    // For example: With I420, this value would be a multiple of 2.
    // Note that this field is unrelated to any horizontal or vertical stride
    // requirements the encoder has on the incoming video frame buffers.
    uint32_t requested_resolution_alignment;

    // Same as above but if true, each simulcast layer should also be divisible
    // by `requested_resolution_alignment`.
    // Note that scale factors `scale_resolution_down_by` may be adjusted so a
    // common multiple is not too large to avoid largely cropped frames and
    // possibly with an aspect ratio far from the original.
    // Warning: large values of scale_resolution_down_by could be changed
    // considerably, especially if `requested_resolution_alignment` is large.
    bool apply_alignment_to_all_simulcast_layers;

    // If true, encoder supports working with a native handle (e.g. texture
    // handle for hw codecs) rather than requiring a raw I420 buffer.
    bool supports_native_handle;

    // The name of this particular encoder implementation, e.g. "libvpx".
    std::string implementation_name;

    // If this field is true, the encoder rate controller must perform
    // well even in difficult situations, and produce close to the specified
    // target bitrate seen over a reasonable time window, drop frames if
    // necessary in order to keep the rate correct, and react quickly to
    // changing bitrate targets. If this method returns true, we disable the
    // frame dropper in the media optimization module and rely entirely on the
    // encoder to produce media at a bitrate that closely matches the target.
    // Any overshooting may result in delay buildup. If this method returns
    // false (default behavior), the media opt frame dropper will drop input
    // frames if it suspect encoder misbehavior. Misbehavior is common,
    // especially in hardware codecs. Disable media opt at your own risk.
    bool has_trusted_rate_controller;

    // If this field is true, the encoder uses hardware support and different
    // thresholds will be used in CPU adaptation.
    bool is_hardware_accelerated;

    // For each spatial layer (simulcast stream or SVC layer), represented as an
    // element in `fps_allocation` a vector indicates how many temporal layers
    // the encoder is using for that spatial layer.
    // For each spatial/temporal layer pair, the frame rate fraction is given as
    // an 8bit unsigned integer where 0 = 0% and 255 = 100%.
    //
    // If the vector is empty for a given spatial layer, it indicates that frame
    // rates are not defined and we can't count on any specific frame rate to be
    // generated. Likely this indicates Vp8TemporalLayersType::kBitrateDynamic.
    //
    // The encoder may update this on a per-frame basis in response to both
    // internal and external signals.
    //
    // Spatial layers are treated independently, but temporal layers are
    // cumulative. For instance, if:
    //   fps_allocation[0][0] = kMaxFramerateFraction / 2;
    //   fps_allocation[0][1] = kMaxFramerateFraction;
    // Then half of the frames are in the base layer and half is in TL1, but
    // since TL1 is assumed to depend on the base layer, the frame rate is
    // indicated as the full 100% for the top layer.
    //
    // Defaults to a single spatial layer containing a single temporal layer
    // with a 100% frame rate fraction.
    absl::InlinedVector<uint8_t, kMaxTemporalStreams>
        fps_allocation[kMaxSpatialLayers];

    // Recommended bitrate limits for different resolutions.
    std::vector<ResolutionBitrateLimits> resolution_bitrate_limits;

    // Obtains the limits from `resolution_bitrate_limits` that best matches the
    // `frame_size_pixels`.
    std::optional<ResolutionBitrateLimits> GetEncoderBitrateLimitsForResolution(
        int frame_size_pixels) const;

    // If true, this encoder has internal support for generating simulcast
    // streams. Otherwise, an adapter class will be needed.
    // Even if true, the config provided to InitEncode() might not be supported,
    // in such case the encoder should return
    // WEBRTC_VIDEO_CODEC_ERR_SIMULCAST_PARAMETERS_NOT_SUPPORTED.
    bool supports_simulcast;

    // The list of pixel formats preferred by the encoder. It is assumed that if
    // the list is empty and supports_native_handle is false, then {I420} is the
    // preferred pixel format. The order of the formats does not matter.
    absl::InlinedVector<VideoFrameBuffer::Type, kMaxPreferredPixelFormats>
        preferred_pixel_formats;

    // Indicates whether or not QP value encoder writes into frame/slice/tile
    // header can be interpreted as average frame/slice/tile QP.
    std::optional<bool> is_qp_trusted;

    // The minimum QP that the encoder is expected to use with the current
    // configuration. This may be used to determine if the encoder has reached
    // its target video quality for static screenshare content.
    std::optional<int> min_qp;
  };

  struct RTC_EXPORT RateControlParameters {
    RateControlParameters();
    RateControlParameters(const VideoBitrateAllocation& bitrate,
                          double framerate_fps);
    RateControlParameters(const VideoBitrateAllocation& bitrate,
                          double framerate_fps,
                          DataRate bandwidth_allocation);
    virtual ~RateControlParameters();

    // Target bitrate, per spatial/temporal layer.
    // A target bitrate of 0bps indicates a layer should not be encoded at all.
    VideoBitrateAllocation target_bitrate;
    // Adjusted target bitrate, per spatial/temporal layer. May be lower or
    // higher than the target depending on encoder behaviour.
    VideoBitrateAllocation bitrate;
    // Target framerate, in fps. A value <= 0.0 is invalid and should be
    // interpreted as framerate target not available. In this case the encoder
    // should fall back to the max framerate specified in `codec_settings` of
    // the last InitEncode() call.
    double framerate_fps;
    // The network bandwidth available for video. This is at least
    // `bitrate.get_sum_bps()`, but may be higher if the application is not
    // network constrained.
    DataRate bandwidth_allocation;

    bool operator==(const RateControlParameters& rhs) const;
    bool operator!=(const RateControlParameters& rhs) const;
  };

  struct LossNotification {
    // The timestamp of the last decodable frame *prior* to the last received.
    // (The last received - described below - might itself be decodable or not.)
    uint32_t timestamp_of_last_decodable;
    // The timestamp of the last received frame.
    uint32_t timestamp_of_last_received;
    // Describes whether the dependencies of the last received frame were
    // all decodable.
    // `false` if some dependencies were undecodable, `true` if all dependencies
    // were decodable, and `nullopt` if the dependencies are unknown.
    std::optional<bool> dependencies_of_last_received_decodable;
    // Describes whether the received frame was decodable.
    // `false` if some dependency was undecodable or if some packet belonging
    // to the last received frame was missed.
    // `true` if all dependencies were decodable and all packets belonging
    // to the last received frame were received.
    // `nullopt` if no packet belonging to the last frame was missed, but the
    // last packet in the frame was not yet received.
    std::optional<bool> last_received_decodable;
  };

  // Negotiated capabilities which the VideoEncoder may expect the other
  // side to use.
  struct Capabilities {
    explicit Capabilities(bool loss_notification)
        : loss_notification(loss_notification) {}
    bool loss_notification;
  };

  struct Settings {
    Settings(const Capabilities& capabilities,
             int number_of_cores,
             size_t max_payload_size)
        : capabilities(capabilities),
          number_of_cores(number_of_cores),
          max_payload_size(max_payload_size) {}

    Capabilities capabilities;
    int number_of_cores;
    size_t max_payload_size;
    // Experimental API - currently only supported by LibvpxVp8Encoder and
    // the OpenH264 encoder. If set, limits the number of encoder threads.
    std::optional<int> encoder_thread_limit;
  };
  // Get the default VP8 settings.
  static VideoCodecVP8 GetDefaultVp8Settings();
  // Get the default VP9 settings.
  static VideoCodecVP9 GetDefaultVp9Settings();
  // Get the default H264 settings.
  static VideoCodecH264 GetDefaultH264Settings();

  virtual ~VideoEncoder() {}

  // Set a FecControllerOverride, through which the encoder may override
  // decisions made by FecController.
  // TODO(bugs.webrtc.org/10769): Update downstream, then make pure-virtual.
  virtual void SetFecControllerOverride(
      FecControllerOverride* fec_controller_override);

  // Initialize the encoder with the information from the codecSettings
  //
  // Input:
  //          - codec_settings    : Codec settings
  //          - settings          : Settings affecting the encoding itself.
  // Input for deprecated version:
  //          - number_of_cores   : Number of cores available for the encoder
  //          - max_payload_size  : The maximum size each payload is allowed
  //                                to have. Usually MTU - overhead.
  //
  // Return value                  : Set bit rate if OK
  //                                 <0 - Errors:
  //                                  WEBRTC_VIDEO_CODEC_ERR_PARAMETER
  //                                  WEBRTC_VIDEO_CODEC_ERR_SIZE
  //                                  WEBRTC_VIDEO_CODEC_MEMORY
  //                                  WEBRTC_VIDEO_CODEC_ERROR
  // TODO(bugs.webrtc.org/10720): After updating downstream projects and posting
  // an announcement to discuss-webrtc, remove the three-parameters variant
  // and make the two-parameters variant pure-virtual.
  /* ABSL_DEPRECATED("bugs.webrtc.org/10720") */ virtual int32_t InitEncode(
      const VideoCodec* codec_settings,
      int32_t number_of_cores,
      size_t max_payload_size);
  virtual int InitEncode(const VideoCodec* codec_settings,
                         const VideoEncoder::Settings& settings);

  // Register an encode complete callback object.
  //
  // Input:
  //          - callback         : Callback object which handles encoded images.
  //
  // Return value                : WEBRTC_VIDEO_CODEC_OK if OK, < 0 otherwise.
  virtual int32_t RegisterEncodeCompleteCallback(
      EncodedImageCallback* callback) = 0;

  // Free encoder memory.
  // Return value                : WEBRTC_VIDEO_CODEC_OK if OK, < 0 otherwise.
  virtual int32_t Release() = 0;

  // Encode an image (as a part of a video stream). The encoded image
  // will be returned to the user through the encode complete callback.
  //
  // Input:
  //          - frame             : Image to be encoded
  //          - frame_types       : Frame type to be generated by the encoder.
  //
  // Return value                 : WEBRTC_VIDEO_CODEC_OK if OK
  //                                <0 - Errors:
  //                                  WEBRTC_VIDEO_CODEC_ERR_PARAMETER
  //                                  WEBRTC_VIDEO_CODEC_MEMORY
  //                                  WEBRTC_VIDEO_CODEC_ERROR
  virtual int32_t Encode(const VideoFrame& frame,
                         const std::vector<VideoFrameType>* frame_types) = 0;

  // Sets rate control parameters: bitrate, framerate, etc. These settings are
  // instantaneous (i.e. not moving averages) and should apply from now until
  // the next call to SetRates().
  virtual void SetRates(const RateControlParameters& parameters) = 0;

  // Inform the encoder when the packet loss rate changes.
  //
  // Input:   - packet_loss_rate  : The packet loss rate (0.0 to 1.0).
  virtual void OnPacketLossRateUpdate(float packet_loss_rate);

  // Inform the encoder when the round trip time changes.
  //
  // Input:   - rtt_ms            : The new RTT, in milliseconds.
  virtual void OnRttUpdate(int64_t rtt_ms);

  // Called when a loss notification is received.
  virtual void OnLossNotification(const LossNotification& loss_notification);

  // Returns meta-data about the encoder, such as implementation name.
  // The output of this method may change during runtime. For instance if a
  // hardware encoder fails, it may fall back to doing software encoding using
  // an implementation with different characteristics.
  virtual EncoderInfo GetEncoderInfo() const = 0;
};
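
As a usage reference, here is a minimal sketch (not actual WebRTC pipeline code) of how a caller typically drives this interface: initialize, register the encode-complete callback, push rate-control parameters, encode, and release. The NullBitstreamSink class, the concrete parameter values and the header paths are illustrative assumptions; the exact OnEncodedImage() signature has changed between WebRTC revisions.

cpp
#include <vector>

#include "api/video/video_bitrate_allocation.h"
#include "api/video/video_frame.h"
#include "api/video_codecs/video_encoder.h"

// Hypothetical sink: in the real pipeline, the registered callback hands the
// encoded bitstream over to the RTP packetizer / sender.
class NullBitstreamSink : public webrtc::EncodedImageCallback {
 public:
  Result OnEncodedImage(const webrtc::EncodedImage& encoded_image,
                        const webrtc::CodecSpecificInfo* codec_specific_info) override {
    // Inspect encoded_image (size, frame type, QP, ...) here.
    return Result(Result::OK);
  }
};

// Drives `encoder` (e.g. a VP8/VP9/H264/AV1 implementation) for one frame.
void EncodeOneFrame(webrtc::VideoEncoder& encoder,
                    const webrtc::VideoCodec& codec_settings,
                    const webrtc::VideoFrame& frame) {
  // 1. Initialize with codec settings plus encoder-level settings.
  webrtc::VideoEncoder::Capabilities capabilities(/*loss_notification=*/false);
  webrtc::VideoEncoder::Settings settings(capabilities, /*number_of_cores=*/4,
                                          /*max_payload_size=*/1200);
  encoder.InitEncode(&codec_settings, settings);

  // 2. Register the object that receives the encoded frames.
  NullBitstreamSink sink;
  encoder.RegisterEncodeCompleteCallback(&sink);

  // 3. Push rate control: 500 kbps on spatial/temporal layer 0 at 30 fps.
  webrtc::VideoBitrateAllocation allocation;
  allocation.SetBitrate(/*spatial_index=*/0, /*temporal_index=*/0, 500000);
  encoder.SetRates(
      webrtc::VideoEncoder::RateControlParameters(allocation, /*framerate_fps=*/30.0));

  // 4. Encode one frame, requesting a keyframe.
  std::vector<webrtc::VideoFrameType> frame_types = {
      webrtc::VideoFrameType::kVideoFrameKey};
  encoder.Encode(frame, &frame_types);

  // 5. Release encoder resources when done.
  encoder.Release();
}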

Looking at modules/video_coding/codecs, we can see that WebRTC supports the H264, VP8, VP9 and AV1 encoders, and that VP8 is used by default. Among these standards, VP8 and VP9 were developed by Google itself and integrate well with the rest of the stack; H264 is implemented on top of the OpenH264 library; AV1 was proposed by AOMedia and is one of the directions WebRTC will keep moving toward. The H264 encoder is declared in modules/video_coding/codecs/h264/h264_encoder_impl.h.

cpp
class H264EncoderImpl : public VideoEncoder { // Inherits from VideoEncoder
 public:
  struct LayerConfig {
    int simulcast_idx = 0;
    int width = -1;
    int height = -1;
    bool sending = true;
    bool key_frame_request = false;
    float max_frame_rate = 0;
    uint32_t target_bps = 0;
    uint32_t max_bps = 0;
    bool frame_dropping_on = false;
    int key_frame_interval = 0;
    int num_temporal_layers = 1;

    void SetStreamState(bool send_stream);
  };

  H264EncoderImpl(const Environment& env, H264EncoderSettings settings);

  ~H264EncoderImpl() override;

  // `settings.max_payload_size` is ignored.
  // The following members of `codec_settings` are used. The rest are ignored.
  // - codecType (must be kVideoCodecH264)
  // - targetBitrate
  // - maxFramerate
  // - width
  // - height
  // Initializes the encoder.
  int32_t InitEncode(const VideoCodec* codec_settings,
                     const VideoEncoder::Settings& settings) override;
  // Releases the encoder.
  int32_t Release() override;
  // Registers the encode-complete callback.
  int32_t RegisterEncodeCompleteCallback(
      EncodedImageCallback* callback) override;
  void SetRates(const RateControlParameters& parameters) override;

  // The result of encoding - an EncodedImage and CodecSpecificInfo - are
  // passed to the encode complete callback.
  int32_t Encode(const VideoFrame& frame,
                 const std::vector<VideoFrameType>* frame_types) override;
  // Returns metadata about the encoder.
  EncoderInfo GetEncoderInfo() const override;

  // Exposed for testing.
  H264PacketizationMode PacketizationModeForTesting() const {
    return packetization_mode_;
  }

 private:
  // Creates the OpenH264 encoder parameters.
  SEncParamExt CreateEncoderParams(size_t i) const;

  webrtc::H264BitstreamParser h264_bitstream_parser_;
  // Reports statistics with histograms.
  void ReportInit();
  void ReportError();

  std::vector<ISVCEncoder*> encoders_;
  std::vector<SSourcePicture> pictures_;
  // Buffers holding downscaled versions of the input frame.
  std::vector<rtc::scoped_refptr<I420Buffer>> downscaled_buffers_;
  // Per-layer (simulcast) configurations.
  std::vector<LayerConfig> configurations_;
  // Encoded output images.
  std::vector<EncodedImage> encoded_images_;
  // SVC controllers.
  std::vector<std::unique_ptr<ScalableVideoController>> svc_controllers_;
  absl::InlinedVector<std::optional<ScalabilityMode>, kMaxSimulcastStreams>
      scalability_modes_;

  const Environment env_;
  VideoCodec codec_;
  H264PacketizationMode packetization_mode_;
  size_t max_payload_size_;
  int32_t number_of_cores_;
  std::optional<int> encoder_thread_limit_;
  // The encode-complete callback.
  EncodedImageCallback* encoded_image_callback_;

  bool has_reported_init_;
  bool has_reported_error_;

  std::vector<uint8_t> tl0sync_limit_;
};
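
Seen from the implementation side, Encode() in a class like H264EncoderImpl ends with exactly this hand-off: fill an EncodedImage and a CodecSpecificInfo, then call OnEncodedImage() on the registered encoded_image_callback_. The following is only an illustrative sketch of that delivery step, with field names taken from the current api/video/encoded_image.h, not the actual OpenH264 wrapper code; header paths and setter names vary between WebRTC revisions.

cpp
#include "api/video/encoded_image.h"
#include "api/video_codecs/video_encoder.h"
#include "modules/video_coding/include/video_codec_interface.h"
#include "modules/video_coding/include/video_error_codes.h"

// Illustrative only: roughly what a VideoEncoder subclass does once the
// underlying codec (e.g. OpenH264) has produced the bitstream for one frame.
int32_t DeliverEncodedFrame(webrtc::EncodedImageCallback* callback,
                            const uint8_t* bitstream, size_t size,
                            int width, int height, bool is_keyframe) {
  webrtc::EncodedImage encoded_image;
  // Copy the codec output into a ref-counted buffer owned by the EncodedImage.
  encoded_image.SetEncodedData(webrtc::EncodedImageBuffer::Create(bitstream, size));
  encoded_image._encodedWidth = width;
  encoded_image._encodedHeight = height;
  encoded_image._frameType = is_keyframe
                                 ? webrtc::VideoFrameType::kVideoFrameKey
                                 : webrtc::VideoFrameType::kVideoFrameDelta;

  // Codec-specific metadata travels alongside the bitstream.
  webrtc::CodecSpecificInfo codec_specific;
  codec_specific.codecType = webrtc::kVideoCodecH264;

  // Hand the result to whoever called RegisterEncodeCompleteCallback().
  const auto result = callback->OnEncodedImage(encoded_image, &codec_specific);
  return result.error == webrtc::EncodedImageCallback::Result::OK
             ? WEBRTC_VIDEO_CODEC_OK
             : WEBRTC_VIDEO_CODEC_ERROR;
}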

It is worth noting that while the H264 encoder is based on OpenH264, the H264 decoder uses the decoder from FFmpeg, as can be seen in modules/video_coding/codecs/h264/h264_decoder_impl.cc.

cpp
int32_t H264DecoderImpl::Decode(const EncodedImage& input_image,
                                bool /*missing_frames*/,
                                int64_t /*render_time_ms*/) {
  // ...
  // Feed the packet into the decoder; this is FFmpeg's decoding API.
  int result = avcodec_send_packet(av_context_.get(), packet.get());

  if (result < 0) {
    RTC_LOG(LS_ERROR) << "avcodec_send_packet error: " << result;
    ReportError();
    return WEBRTC_VIDEO_CODEC_ERROR;
  }
  // Retrieve the decoded frame (FFmpeg API).
  result = avcodec_receive_frame(av_context_.get(), av_frame_.get());
  if (result < 0) {
    RTC_LOG(LS_ERROR) << "avcodec_receive_frame error: " << result;
    ReportError();
    return WEBRTC_VIDEO_CODEC_ERROR;
  }
  // ...
}
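
The avcodec_send_packet() / avcodec_receive_frame() pair used above is the standard FFmpeg decoding API. In general one packet can yield zero or more frames, so outside of this low-delay WebRTC wrapper the receive side is usually written as a loop. Below is a minimal standalone sketch of that pattern (plain FFmpeg, with a hypothetical DecodePacket() helper and error handling reduced to return codes):

cpp
#include <cerrno>

extern "C" {
#include <libavcodec/avcodec.h>
}

// Feeds one packet to the decoder and drains every frame it produces.
// Returns the number of decoded frames, or a negative AVERROR code.
int DecodePacket(AVCodecContext* ctx, const AVPacket* packet, AVFrame* frame) {
  int ret = avcodec_send_packet(ctx, packet);
  if (ret < 0)
    return ret;  // The decoder rejected the packet.

  int decoded = 0;
  while (true) {
    ret = avcodec_receive_frame(ctx, frame);
    if (ret == AVERROR(EAGAIN) || ret == AVERROR_EOF)
      break;       // Needs more input, or the decoder has been fully drained.
    if (ret < 0)
      return ret;  // A genuine decoding error.
    ++decoded;     // `frame` now holds a decoded picture (e.g. YUV420P).
  }
  return decoded;
}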