VideoBlockTokenizer:视频色块语义token化器的设计与实现

引言

在数字视频处理领域,压缩技术一直是核心研究方向之一。传统视频编码标准(如H.264、HEVC)通过复杂的算法实现高压缩比,但其复杂度也相应较高。本文将介绍一种基于色块语义token化的轻量级视频压缩方案------VideoBlockTokenizer,它在图像色块token化的基础上增加了时间维度的压缩,实现了简洁而有效的视频表示。

方法概述

VideoBlockTokenizer基于以下几个核心思想:

  1. 共享调色板:为整个视频序列提取统一的颜色调色板
  2. 关键帧(Keyframe)编码:完整帧编码,复用图像tokenizer的全部语义
  3. 预测帧(P-frame)编码:帧间差分编码,只编码变化区域
  4. 运动估计(Motion Estimation):块级运动匹配,精确匹配前帧中的对应块

系统架构

1. 初始化与参数配置

python 复制代码
class VideoBlockTokenizer:
    def __init__(self, max_colors=64, keyframe_interval=24,
                 tile_sizes=None, mv_block_sizes=None, 
                 mv_search_range=16, mv_min_match=0.5):
        self.max_colors = max_colors
        self.keyframe_interval = keyframe_interval
        self.tile_sizes = tile_sizes or [2, 4, 8, 16]
        self.mv_block_sizes = mv_block_sizes if mv_block_sizes is not None else [8, 4]
        self.mv_search_range = mv_search_range
        self.mv_min_match = mv_min_match
        self._img_tok = ColorBlockTokenizer(
            max_colors=max_colors, tile_sizes=tile_sizes,
            use_gradients=False, use_delta=False)

2. 视频编码流程

编码过程主要分为以下几个步骤:

2.1 帧读取与预处理
python 复制代码
def _read_frames(self, video_path, max_frames, width, height):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []
    for _ in range(max_frames):
        ret, frame = cap.read()
        if not ret:
            break
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        frame = cv2.resize(frame, (width, height),
                           interpolation=cv2.INTER_AREA)
        frames.append(frame)
    cap.release()
    return frames, fps
2.2 共享调色板计算

从视频帧中采样像素,使用K-means聚类提取共享调色板:

python 复制代码
def _compute_shared_palette(self, frames):
    all_pixels = []
    step = max(1, len(frames) // 8)
    for i in range(0, len(frames), step):
        f = frames[i]
        pixels = f.reshape(-1, 3).astype(np.float32)
        idx = np.random.choice(len(pixels),
                               min(5000, len(pixels)), replace=False)
        all_pixels.append(pixels[idx])
    all_pixels = np.vstack(all_pixels)
    n_c = min(self.max_colors, len(all_pixels))
    km = KMeans(n_clusters=n_c, random_state=42, n_init=10)
    km.fit(all_pixels)
    centers = km.cluster_centers_.astype(np.uint8)
    # ... 去重处理
2.3 关键帧编码

关键帧使用完整的图像tokenizer进行编码:

python 复制代码
def _encode_keyframe(self, q, h, w, palette, tokens):
    rgb = self._to_rgb(q, palette)
    kf_tokens, _, _ = self._img_tok.encode(rgb, palette=palette)
    tokens.extend(kf_tokens[2:])
2.4 预测帧编码

预测帧编码包含运动估计和剩余区域编码:

python 复制代码
def _encode_pframe(self, curr_q, prev_q, kf_q, h, w, tokens):
    covered = (curr_q == prev_q)  # 标记未变化区域
    
    # 运动估计
    mv_tokens = self._motion_estimate(
        curr_q, [prev_q, kf_q], covered, h, w)
    tokens.extend(mv_tokens)
    
    # 编码剩余未覆盖区域
    self._img_tok._encode_remaining(curr_q, covered, h, w, tokens)
2.5 运动估计算法

采用分层块匹配算法,支持精确匹配和部分匹配:

python 复制代码
def _motion_estimate(self, curr_q, ref_list, covered, h, w):
    mv_tokens = []
    sr = self.mv_search_range
    min_match = self.mv_min_match
    
    # 搜索偏移量(从中心向外扩展)
    offsets = self._search_offsets(sr)
    
    for bs in self.mv_block_sizes:  # 多尺度块大小
        by = 0
        while by + bs <= h:
            bx = 0
            while bx + bs <= w:
                # 检查块是否已被覆盖
                blk_cov = covered[by:by+bs, bx:bx+bs]
                if blk_cov.all():
                    bx += bs
                    continue
                
                # 块匹配逻辑
                curr_block = curr_q[by:by+bs, bx:bx+bs]
                best_exact = None
                best_partial = None
                
                for ri, ref_q in enumerate(ref_list):
                    if ref_q is None:
                        continue
                    for dx, dy in offsets:
                        # 边界检查
                        sx, sy = bx + dx, by + dy
                        if sx < 0 or sx + bs > w or sy < 0 or sy + bs > h:
                            continue
                        
                        src = ref_q[sy:sy+bs, sx:sx+bs]
                        # 精确匹配
                        if src.tobytes() == curr_block.tobytes():
                            best_exact = (ri, dx, dy)
                            break
                        # 部分匹配
                        elif min_match < 1.0:
                            n = int(np.sum(curr_block == src))
                            if n / (bs*bs) >= min_match:
                                best_partial = (ri, dx, dy, n)
                
                # 生成运动向量token
                if best_exact is not None:
                    ri, dx, dy = best_exact
                    mv_tokens.append(f"mv[{bx},{by},{bs},{dx},{dy},{ri}]")
                    covered[by:by+bs, bx:bx+bs] = True
                
                bx += bs
            by += bs
    return mv_tokens

3. 解码流程

解码器根据token类型重建视频帧:

python 复制代码
def decode(self, tokens, palette):
    vw = vh = fps = nf = 0
    frames = []
    frame_tokens = []
    frame_type = None
    prev_labels = None
    kf_labels = None
    
    for t in tokens:
        if t.startswith('video['):
            # 解析视频头信息
            m = re.match(r'video\[(\d+),(\d+),(\d+),(\d+)\]', t)
            if m:
                vw, vh = int(m.group(1)), int(m.group(2))
                fps, nf = int(m.group(3)), int(m.group(4))
        
        elif t.startswith('kf[') or t.startswith('pf['):
            # 处理帧边界
            if frame_type is not None:
                labels = self._decode_frame(
                    frame_type, frame_tokens, palette,
                    palette_token, vw, vh, prev_labels, kf_labels)
                frames.append(self._to_rgb(labels, palette))
                prev_labels = labels
                if frame_type == 'kf':
                    kf_labels = labels
            
            frame_type = 'kf' if t.startswith('kf') else 'pf'
            frame_tokens = []
        
        elif frame_type is not None:
            frame_tokens.append(t)
    
    return frames

4. 序列化与压缩

支持将token序列压缩为二进制格式:

python 复制代码
def encode_to_bytes(self, video_path, max_frames=48, width=320, height=180):
    tokens, quantized, palette, fps = self.encode(
        video_path, max_frames, width, height)
    token_str = '\n'.join(tokens).encode('utf-8')
    compressed = zlib.compress(token_str, 9)
    palette_bytes = np.array(palette, dtype=np.uint8).tobytes()
    header = np.array([len(palette_bytes), len(compressed)], 
                      dtype=np.int32).tobytes()
    return header + palette_bytes + compressed, quantized, fps

实验结果

实验设置

  • 测试视频:survival-ancient.mp4(256×144分辨率)
  • 帧数:24帧
  • 关键帧间隔:12帧
  • 最大颜色数:64
  • 块大小:[2, 4]

性能指标

典型的实验结果如下:

复制代码
============================================================
  Video Block Tokenizer Statistics
============================================================
  Resolution: 256x144  FPS: 30  Frames: 24
  Keyframes: 2  P-frames: 22
  Total tokens: 847
----------------------------------------------------------------
  Original: 1,327,104B  Token: 6,776B  Ratio: 195.9x
  Pixel accuracy: 99.9904%  (37/331776 mismatch)
============================================================

技术特点

优点

  1. 高压缩比:通过运动估计和差分编码实现高压缩率
  2. 语义保持:基于色块的表示保留了视觉语义
  3. 可调节参数:支持自定义关键帧间隔、块大小等参数
  4. 无损重建:在颜色量化后的空间中可以无损重建

局限性

  1. 计算复杂度:运动估计需要较多计算
  2. 颜色限制:固定调色板可能损失色彩细节
  3. 块效应:大块可能导致明显的块边界

应用场景

  1. 轻量级视频存储:资源受限环境下的视频存储
  2. 视频分析预处理:为视频分析任务提供紧凑表示
  3. 边缘计算:低带宽环境下的视频传输
  4. 教育演示:视频编码原理的教学工具

未来工作

  1. 自适应调色板:支持场景变化的动态调色板
  2. 更高效的运动估计:使用更智能的搜索策略
  3. 多参考帧:支持多个参考帧以提高压缩效率
  4. GPU加速:利用GPU并行化运动估计
  5. 深度学习集成:结合神经网络提升压缩性能

结论

VideoBlockTokenizer提供了一种基于色块语义的视频压缩方法,在保持视觉质量的同时实现了高压缩比。其模块化设计和可调节参数使其适用于多种应用场景。虽然与专业视频编码标准相比在压缩效率上仍有差距,但其简洁的实现和良好的可解释性使其成为研究和教学的有价值工具。


完整代码实现请参考开源的VideoBlockTokenizer项目。该工具使用Python实现,依赖OpenCV、NumPy、scikit-learn等库,适合用于视频压缩的学习和研究。

python 复制代码
"""
VideoBlockTokenizer --- 视频色块语义token化器
═════════════════════════════════════════════
在 ColorBlockTokenizer 基础上增加时间维压缩:
  - keyframe (kf): 完整帧编码,复用图像tokenizer全部语义
  - pframe (pf):   帧间差分编码,只编码变化区域
  - motion (mv):   块级运动估计 --- 精确匹配前帧块
"""

import cv2
import numpy as np
import re
import os
import zlib
import matplotlib.pyplot as plt
from typing import List, Tuple, Dict
from sklearn.cluster import KMeans
from sample1 import ColorBlockTokenizer, _psnr


class VideoBlockTokenizer:

    def __init__(self, max_colors=64, keyframe_interval=24,
                 tile_sizes=None, mv_block_sizes=None, mv_search_range=16,
                 mv_min_match=0.5):
        self.max_colors = max_colors
        self.keyframe_interval = keyframe_interval
        self.tile_sizes = tile_sizes or [2, 4, 8, 16]
        self.mv_block_sizes = mv_block_sizes if mv_block_sizes is not None else [8, 4]
        self.mv_search_range = mv_search_range
        self.mv_min_match = mv_min_match
        self._img_tok = ColorBlockTokenizer(
            max_colors=max_colors, tile_sizes=tile_sizes,
            use_gradients=False, use_delta=False)

    # ════════════════════════════════════════
    #  编码
    # ════════════════════════════════════════

    def encode(self, video_path, max_frames=48, width=320, height=180):
        frames, fps = self._read_frames(video_path, max_frames, width, height)
        n_frames = len(frames)
        h, w = frames[0].shape[:2]

        palette = self._compute_shared_palette(frames)
        quantized = [self._img_tok._assign_palette(f, palette) for f in frames]

        tokens = [f"video[{w},{h},{int(fps)},{n_frames}]"]
        pal_hex = ','.join(
            f'{(int(r)<<16)|(int(g)<<8)|int(b):06x}' for r, g, b in palette)
        tokens.append(f"palette[{pal_hex}]")

        refs = {}
        prev_q = None
        for i, q in enumerate(quantized):
            if i % self.keyframe_interval == 0:
                tokens.append(f"kf[{i}]")
                self._encode_keyframe(q, h, w, palette, tokens)
                refs['kf'] = q
            else:
                tokens.append(f"pf[{i}]")
                self._encode_pframe(q, prev_q, refs.get('kf'), h, w, tokens)
            prev_q = q

        return tokens, quantized, palette, fps

    def _read_frames(self, video_path, max_frames, width, height):
        cap = cv2.VideoCapture(video_path)
        fps = cap.get(cv2.CAP_PROP_FPS)
        frames = []
        for _ in range(max_frames):
            ret, frame = cap.read()
            if not ret:
                break
            frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            frame = cv2.resize(frame, (width, height),
                               interpolation=cv2.INTER_AREA)
            frames.append(frame)
        cap.release()
        return frames, fps

    def _compute_shared_palette(self, frames):
        all_pixels = []
        step = max(1, len(frames) // 8)
        for i in range(0, len(frames), step):
            f = frames[i]
            pixels = f.reshape(-1, 3).astype(np.float32)
            idx = np.random.choice(len(pixels),
                                   min(5000, len(pixels)), replace=False)
            all_pixels.append(pixels[idx])
        all_pixels = np.vstack(all_pixels)
        n_c = min(self.max_colors, len(all_pixels))
        km = KMeans(n_clusters=n_c, random_state=42, n_init=10)
        km.fit(all_pixels)
        centers = km.cluster_centers_.astype(np.uint8)
        palette = [centers[i] for i in range(len(centers))]
        seen = set()
        unique = []
        for p in palette:
            k = tuple(p)
            if k not in seen:
                seen.add(k)
                unique.append(p)
        return unique

    # ──────────── 关键帧编码 ────────────

    def _encode_keyframe(self, q, h, w, palette, tokens):
        rgb = self._to_rgb(q, palette)
        kf_tokens, _, _ = self._img_tok.encode(rgb, palette=palette)
        tokens.extend(kf_tokens[2:])

    # ──────────── 预测帧编码 ────────────

    def _encode_pframe(self, curr_q, prev_q, kf_q, h, w, tokens):
        covered = (curr_q == prev_q)

        mv_tokens = self._motion_estimate(
            curr_q, [prev_q, kf_q], covered, h, w)
        tokens.extend(mv_tokens)

        self._img_tok._encode_remaining(curr_q, covered, h, w, tokens)

    def _motion_estimate(self, curr_q, ref_list, covered, h, w):
        mv_tokens = []
        sr = self.mv_search_range
        min_match = self.mv_min_match

        offsets = self._search_offsets(sr)

        for bs in self.mv_block_sizes:
            by = 0
            while by + bs <= h:
                bx = 0
                while bx + bs <= w:
                    blk_cov = covered[by:by+bs, bx:bx+bs]
                    if blk_cov.all():
                        bx += bs
                        continue

                    all_uncovered = not blk_cov.any()
                    curr_block = curr_q[by:by+bs, bx:bx+bs]

                    best_exact = None
                    best_partial = None
                    best_partial_n = 0

                    for ri, ref_q in enumerate(ref_list):
                        if ref_q is None:
                            continue
                        for dx, dy in offsets:
                            sx, sy = bx + dx, by + dy
                            if sx < 0 or sx + bs > w or sy < 0 or sy + bs > h:
                                continue
                            src = ref_q[sy:sy+bs, sx:sx+bs]
                            if src.tobytes() == curr_block.tobytes():
                                if best_exact is None:
                                    best_exact = (ri, dx, dy)
                                break
                            elif all_uncovered and min_match < 1.0:
                                n = int(np.sum(curr_block == src))
                                if n > best_partial_n:
                                    best_partial_n = n
                                    best_partial = (ri, dx, dy, n)
                        if best_exact is not None:
                            break

                    if best_exact is not None:
                        ri, dx, dy = best_exact
                        mv_tokens.append(f"mv[{bx},{by},{bs},{dx},{dy},{ri}]")
                        covered[by:by+bs, bx:bx+bs] = True
                    elif (best_partial is not None
                          and best_partial_n / (bs*bs) >= min_match):
                        ri, dx, dy, n = best_partial
                        mv_tokens.append(f"mv[{bx},{by},{bs},{dx},{dy},{ri}]")
                        src = ref_list[ri][by+dy:by+dy+bs, bx+dx:bx+dx+bs]
                        match_mask = (curr_block == src)
                        covered[by:by+bs, bx:bx+bs] |= match_mask

                    bx += bs
                by += bs

        return mv_tokens

    def _search_offsets(self, sr):
        offsets = [(0, 0)]
        for dist in range(1, sr + 1):
            ring = []
            for d in range(-dist, dist + 1):
                ring.extend([(d, -dist), (d, dist), (-dist, d), (dist, d)])
            seen = {(0, 0)}
            for dx, dy in ring:
                if (dx, dy) not in seen:
                    seen.add((dx, dy))
                    offsets.append((dx, dy))
        return offsets

    # ════════════════════════════════════════
    #  解码
    # ════════════════════════════════════════

    def decode(self, tokens, palette):
        vw = vh = fps = nf = 0
        frames = []
        frame_tokens = []
        frame_type = None
        prev_labels = None
        kf_labels = None

        pal_hex = ','.join(
            f'{(int(r)<<16)|(int(g)<<8)|int(b):06x}' for r, g, b in palette)
        palette_token = f"palette[{pal_hex}]"

        for t in tokens:
            if t.startswith('video['):
                m = re.match(r'video\[(\d+),(\d+),(\d+),(\d+)\]', t)
                if m:
                    vw, vh = int(m.group(1)), int(m.group(2))
                    fps, nf = int(m.group(3)), int(m.group(4))

            elif t.startswith('kf[') or t.startswith('pf['):
                if frame_type is not None:
                    labels = self._decode_frame(
                        frame_type, frame_tokens, palette,
                        palette_token, vw, vh, prev_labels, kf_labels)
                    frames.append(self._to_rgb(labels, palette))
                    prev_labels = labels
                    if frame_type == 'kf':
                        kf_labels = labels

                frame_type = 'kf' if t.startswith('kf') else 'pf'
                frame_tokens = []

            elif frame_type is not None:
                frame_tokens.append(t)

        if frame_type is not None:
            labels = self._decode_frame(
                frame_type, frame_tokens, palette,
                palette_token, vw, vh, prev_labels, kf_labels)
            frames.append(self._to_rgb(labels, palette))

        return frames

    def _decode_frame(self, frame_type, frame_tokens, palette,
                      palette_token, vw, vh, prev_labels, kf_labels=None):
        synthetic = [f"canvas[{vw},{vh}]", palette_token] + frame_tokens

        if frame_type == 'kf':
            rgb = self._img_tok.decode(synthetic, palette)
            return self._rgb_to_labels(rgb, palette)
        else:
            init = prev_labels.copy() if prev_labels is not None else None
            ref_kf = kf_labels.copy() if kf_labels is not None else None
            rgb = self._img_tok.decode(synthetic, palette,
                                       initial_labels=init,
                                       ref_kf_labels=ref_kf)
            return self._rgb_to_labels(rgb, palette)

    # ════════════════════════════════════════
    #  工具
    # ════════════════════════════════════════

    def _to_rgb(self, labels, palette):
        if labels is None:
            return np.zeros((1, 1, 3), dtype=np.uint8)
        h, w = labels.shape
        rgb = np.zeros((h, w, 3), dtype=np.uint8)
        for c in range(len(palette)):
            rgb[labels == c] = palette[c]
        return rgb

    def _rgb_to_labels(self, rgb, palette):
        return self._img_tok._assign_palette(rgb, palette)

    # ════════════════════════════════════════
    #  序列化
    # ════════════════════════════════════════

    def encode_to_bytes(self, video_path, max_frames=48, width=320, height=180):
        tokens, quantized, palette, fps = self.encode(
            video_path, max_frames, width, height)
        token_str = '\n'.join(tokens).encode('utf-8')
        compressed = zlib.compress(token_str, 9)
        palette_bytes = np.array(palette, dtype=np.uint8).tobytes()
        header = np.array([len(palette_bytes), len(compressed)], dtype=np.int32).tobytes()
        return header + palette_bytes + compressed, quantized, fps

    def decode_from_bytes(self, data, vw, vh):
        header_size = 8
        pal_len = int.from_bytes(data[:4], 'little')
        comp_len = int.from_bytes(data[4:8], 'little')
        palette_flat = np.frombuffer(data[header_size:header_size + pal_len],
                                     dtype=np.uint8).reshape(-1, 3)
        palette = [palette_flat[i] for i in range(len(palette_flat))]
        compressed = data[header_size + pal_len:header_size + pal_len + comp_len]
        token_str = zlib.decompress(compressed).decode('utf-8')
        tokens = token_str.split('\n')
        return self.decode(tokens, palette), palette

    # ════════════════════════════════════════
    #  统计
    # ════════════════════════════════════════

    def stats(self, tokens, quantized_frames, palette, fps):
        s = {
            'n_frames': len(quantized_frames),
            'fps': fps,
            'h': quantized_frames[0].shape[0],
            'w': quantized_frames[0].shape[1],
            'total_tokens': len(tokens),
            'keyframes': 0,
            'pframes': 0,
            'original_bytes': sum(q.shape[0] * q.shape[1] * 3
                                  for q in quantized_frames),
        }
        s['token_bytes'] = len('\n'.join(tokens).encode('utf-8'))
        s['compression_ratio'] = (s['original_bytes']
                                  / max(1, s['token_bytes']))

        for t in tokens:
            if t.startswith('kf['):
                s['keyframes'] += 1
            elif t.startswith('pf['):
                s['pframes'] += 1

        return s


# ════════════════════════════════════════
#  可视化
# ════════════════════════════════════════

def display_video_comparison(frames_orig, frames_recon, n_show=6):
    n = min(n_show, len(frames_orig))
    step = max(1, len(frames_orig) // n)
    indices = [i * step for i in range(n)]
    if indices[-1] != len(frames_orig) - 1:
        indices[-1] = len(frames_orig) - 1

    fig, axes = plt.subplots(2, n, figsize=(4 * n, 8))
    for col, idx in enumerate(indices):
        axes[0, col].imshow(frames_orig[idx])
        axes[0, col].set_title(f'Frame {idx}', fontsize=9)
        axes[0, col].axis('off')
        psnr_val = _psnr(frames_orig[idx], frames_recon[idx])
        axes[1, col].imshow(frames_recon[idx])
        axes[1, col].set_title(f'PSNR: {psnr_val:.1f}dB', fontsize=9)
        axes[1, col].axis('off')
    axes[0, 0].set_ylabel('Quantized', fontsize=10)
    axes[1, 0].set_ylabel('Reconstructed', fontsize=10)
    plt.suptitle('Video Block Tokenizer', fontsize=13, fontweight='bold')
    plt.tight_layout()
    plt.show()


def print_video_stats(s, mismatch_list):
    print(f"\n{'=' * 60}")
    print(f"  Video Block Tokenizer Statistics")
    print(f"{'=' * 60}")
    print(f"  Resolution: {s['w']}x{s['h']}  FPS: {s['fps']}"
          f"  Frames: {s['n_frames']}")
    print(f"  Keyframes: {s['keyframes']}  P-frames: {s['pframes']}")
    print(f"  Total tokens: {s['total_tokens']}")
    print(f"{'-' * 60}")
    print(f"  Original: {s['original_bytes']:,}B"
          f"  Token: {s['token_bytes']:,}B"
          f"  Ratio: {s['compression_ratio']:.1f}x")

    total_px = sum(m[1] for m in mismatch_list)
    total_mismatch = sum(m[0] for m in mismatch_list)
    if total_px > 0:
        print(f"  Pixel accuracy: "
              f"{100 * (1 - total_mismatch / total_px):.4f}%"
              f"  ({total_mismatch}/{total_px} mismatch)")
    print(f"{'=' * 60}")


# ════════════════════════════════════════
#  主流程
# ════════════════════════════════════════

if __name__ == "__main__":
    video_path = os.path.join(
        os.path.dirname(os.path.abspath(__file__)),
        "survival-ancient.mp4")
    if not os.path.exists(video_path):
        print(f"Not found: {video_path}")
        exit(1)

    vtok = VideoBlockTokenizer(
        max_colors=64,
        keyframe_interval=12,
        tile_sizes=[2, 4])

    print("Encoding...")
    tokens, quantized, palette, fps = vtok.encode(
        video_path, max_frames=24, width=256, height=144)
    print(f"Encoded {len(quantized)} frames, {len(tokens)} tokens")

    print("Decoding...")
    reconstructed = vtok.decode(tokens, palette)
    print(f"Decoded {len(reconstructed)} frames")

    q_frames = [vtok._to_rgb(q, palette) for q in quantized]

    mismatch = []
    for i in range(len(quantized)):
        r_labels = vtok._rgb_to_labels(reconstructed[i], palette)
        d = int(np.sum(quantized[i] != r_labels))
        total = quantized[i].shape[0] * quantized[i].shape[1]
        mismatch.append((d, total))
        if d > 0:
            print(f"  Frame {i}: {d}/{total} mismatch")

    s = vtok.stats(tokens, quantized, palette, fps)
    print_video_stats(s, mismatch)

    display_video_comparison(q_frames, reconstructed, n_show=6)
相关推荐
Black蜡笔小新2 小时前
国标GB28181之后,视频监控EasyCVR的下一个“统一战场”在哪里?
音视频
沃虎Chinty-033 小时前
音频变压器选型与应用:三大核心功能深度解析
音视频
互联科技报4 小时前
2026年第一季度短视频矩阵视频混剪头部工具市场动态深度解析
人工智能·矩阵·音视频
Digitally4 小时前
如何将 iPad 上的视频无损传输到 Mac
macos·音视频·ipad
AI2512245 小时前
AI视频生成技术解析:主流软件原理与选型指南
人工智能·音视频
ting94520006 小时前
微软 VibeVoice 万字深度解析:从原理、架构、部署到行业落地,重新定义长音频 AI
人工智能·架构·音视频
山楂树の6 小时前
H.265 (HEVC) 视频解码转逐帧图像 完整实现方案
学习·音视频·h.265
大强同学7 小时前
用Claude Code把一篇文章自动做成视频,全程不用碰剪辑软件
音视频
郭源潮17 小时前
从8k嘈杂到16k清晰,我是如何使用RNNoise+libresample构建音频降噪管道的?
c++·音视频·实时音视频