A Complete Guide to Screen Recording on macOS: ScreenCaptureKit and Audio Capture in Practice

1. Introduction

Screen-recording development on macOS differs substantially from Windows. On Windows we can choose among DirectShow, Media Foundation, Windows Graphics Capture, and other technologies, whereas macOS relies primarily on ScreenCaptureKit (iOS uses ReplayKit). This article walks through the complete macOS screen-recording development workflow from scratch.

2. Technology Selection: Why ScreenCaptureKit

2.1 Evolution of screen capture on macOS

| Approach | OS requirement | Characteristics | Recommendation |
|---|---|---|---|
| CGWindowListCopyWindowInfo + CGImage | macOS 10.0+ | Screenshot-based, poor performance | ⭐ |
| AVCaptureScreenInput | macOS 10.7+ | Built on AVFoundation, limited features | ⭐⭐ |
| ScreenCaptureKit | macOS 12.3+ | Apple's recommended framework, full-featured | ⭐⭐⭐⭐⭐ |

2.2 Advantages of ScreenCaptureKit

ScreenCaptureKit is Apple's current screen-capture framework, first shipped in macOS 12.3 and presented at WWDC 2022. Compared with the older approaches it offers significant advantages:

  1. Performance: frame data comes straight from the WindowServer, avoiding extra copies
  2. Privacy: built-in permission management; the user grants access selectively
  3. Features: window filtering, excluding the app's own windows, multi-display support, and more
  4. Audio: native capture of system audio and (on newer releases) the microphone
  5. HDR: supports capturing HDR content
swift

import ScreenCaptureKit

// Core ScreenCaptureKit types and how they relate
// SCShareableContent    - the shareable content (displays, windows, applications)
// SCDisplay             - a display
// SCWindow              - a window
// SCRunningApplication  - a running application
// SCStream              - the capture stream
// SCStreamConfiguration - stream configuration
// SCStreamOutput        - the output delegate protocol

3. The macOS Permission Model in Detail

3.1 Screen-recording permission

macOS requires an app to hold the screen-recording permission before it can capture screen content. This is a system-level privacy protection.

Info.plist configuration
xml

<!-- Required usage description -->
<key>NSScreenCaptureUsageDescription</key>
<string>Screen-recording access is required to record the screen</string>
Permission check and request
swift

import ScreenCaptureKit
import AppKit

class PermissionManager {
    /// Check the current screen-recording permission state
    static func checkScreenRecordingPermission() async -> Bool {
        do {
            // Fetching the shareable content throws if permission is missing
            let content = try await SCShareableContent.excludingDesktopWindows(
                false,
                onScreenWindowsOnly: true
            )
            return content.displays.count > 0 || content.windows.count > 0
        } catch {
            print("Screen-recording permission check failed: \(error)")
            return false
        }
    }
    
    /// Request screen-recording permission
    static func requestScreenRecordingPermission() async -> Bool {
        do {
            _ = try await SCShareableContent.excludingDesktopWindows(
                false,
                onScreenWindowsOnly: true
            )
            return true
        } catch let error as SCStreamError {
            if error.code == .userDeclined {
                // The user declined; guide them to System Settings
                openSystemPreferences()
            }
            return false
        } catch {
            return false
        }
    }
    
    /// Open the Privacy pane in System Settings
    static func openSystemPreferences() {
        if let url = URL(string: "x-apple.systempreferences:com.apple.preference.security?Privacy_ScreenCapture") {
            NSWorkspace.shared.open(url)
        }
    }
}
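The System Settings deep link used in `openSystemPreferences()` generalizes to other privacy panes. A small pure helper keeps the URL construction in one place; the enum name below is ours, and the two pane identifiers (`Privacy_ScreenCapture`, `Privacy_Microphone`) are the commonly used anchors:

```swift
import Foundation

/// Privacy panes we may need to send the user to (illustrative subset).
enum PrivacyPane: String {
    case screenCapture = "Privacy_ScreenCapture"
    case microphone = "Privacy_Microphone"
}

/// Build the x-apple.systempreferences deep link for a privacy pane.
func privacySettingsURLString(for pane: PrivacyPane) -> String {
    return "x-apple.systempreferences:com.apple.preference.security?\(pane.rawValue)"
}
```

Pass the result to `URL(string:)` and `NSWorkspace.shared.open(_:)` exactly as in `openSystemPreferences()` above.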

3.2 Microphone permission

If you also capture microphone audio, the microphone permission is required as well:

xml

<!-- Info.plist -->
<key>NSMicrophoneUsageDescription</key>
<string>Microphone access is required to record audio</string>
swift
import AVFoundation

class MicrophonePermissionManager {
    static func checkMicrophonePermission() -> AVAuthorizationStatus {
        return AVCaptureDevice.authorizationStatus(for: .audio)
    }
    
    static func requestMicrophonePermission() async -> Bool {
        return await withCheckedContinuation { continuation in
            AVCaptureDevice.requestAccess(for: .audio) { granted in
                continuation.resume(returning: granted)
            }
        }
    }
}

3.3 Best practices for handling permission state

swift

class PermissionState {
    static let shared = PermissionState()
    
    var screenRecordingGranted = false
    var microphoneGranted = false
    
    func checkAllPermissions() async {
        screenRecordingGranted = await PermissionManager.checkScreenRecordingPermission()
        microphoneGranted = MicrophonePermissionManager.checkMicrophonePermission() == .authorized
    }
    
    /// Possible outcomes of a combined permission check
    enum PermissionCheckResult {
        case allGranted
        case screenRecordingDenied
        case microphoneDenied
        case bothDenied
    }
}
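The enum above only names the outcomes; the mapping from the two permission flags to a result is simple enough to keep as a pure, testable function. A sketch (the function name is ours):

```swift
/// Mirrors PermissionState.PermissionCheckResult above.
enum PermissionCheckResult {
    case allGranted
    case screenRecordingDenied
    case microphoneDenied
    case bothDenied
}

/// Combine the two permission flags into a single result.
func evaluatePermissions(screenRecording: Bool, microphone: Bool) -> PermissionCheckResult {
    switch (screenRecording, microphone) {
    case (true, true):   return .allGranted
    case (false, true):  return .screenRecordingDenied
    case (true, false):  return .microphoneDenied
    case (false, false): return .bothDenied
    }
}
```

Call it with the two flags gathered by `checkAllPermissions()` to decide which onboarding dialog to show.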

4. The Complete ScreenCaptureKit Workflow

4.1 Fetch the shareable content

The first step is to enumerate what the system allows you to capture: displays, windows, and applications:

swift

import ScreenCaptureKit

// Note: `@unchecked` applies only to Sendable; the class simply conforms to
// SCStreamOutput (and SCStreamDelegate, since it is passed as the stream delegate).
class ScreenCaptureManager: NSObject, SCStreamOutput, SCStreamDelegate {
    private var stream: SCStream?
    
    /// Fetch the shareable content
    func getShareableContent() async throws -> SCShareableContent {
        let content = try await SCShareableContent.excludingDesktopWindows(
            false,                     // do not exclude desktop windows
            onScreenWindowsOnly: true  // only windows currently on screen
        )
        
        // List all displays
        print("=== Displays ===")
        for display in content.displays {
            print("Display: \(display.displayID), width: \(display.width), height: \(display.height)")
        }
        
        // List all windows
        print("\n=== Windows ===")
        for window in content.windows {
            print("Window: \(window.title ?? "untitled"), app: \(window.owningApplication?.applicationName ?? "unknown")")
        }
        
        return content
    }
}

4.2 Configure the capture stream

Create an SCStreamConfiguration to set the capture parameters:

swift

extension ScreenCaptureManager {
    /// Build a stream configuration
    func createStreamConfiguration(width: Int, height: Int) -> SCStreamConfiguration {
        let config = SCStreamConfiguration()
        
        // Basic video settings
        config.width = width
        config.height = height
        config.sourceRect = CGRect(x: 0, y: 0, width: width, height: height)
        
        // Frame rate and buffering
        config.minimumFrameInterval = CMTime(value: 1, timescale: 30) // 30 fps
        config.queueDepth = 5  // sample-buffer queue depth
        
        // Background color
        config.backgroundColor = .black
        
        // Pixel format: BGRA for broad compatibility
        config.pixelFormat = kCVPixelFormatType_32BGRA
        
        // Audio capture (macOS 13.0+)
        if #available(macOS 13.0, *) {
            config.capturesAudio = true
            config.sampleRate = 48000
            config.channelCount = 2
        }
        
        return config
    }
}

4.3 Create and start the capture stream

swift

extension ScreenCaptureManager {
    /// Start capturing a given display
    func startCapture(display: SCDisplay, configuration: SCStreamConfiguration) async throws {
        // Create the stream
        stream = SCStream(filter: SCContentFilter(display: display, excludingWindows: []),
                          configuration: configuration,
                          delegate: self)
        
        // Attach the video output
        try stream?.addStreamOutput(self, type: .screen, sampleHandlerQueue: .global())
        
        // Attach the audio output (macOS 13.0+)
        if #available(macOS 13.0, *) {
            try stream?.addStreamOutput(self, type: .audio, sampleHandlerQueue: .global())
        }
        
        // Start capturing
        try await stream?.startCapture()
        print("Screen capture started")
    }
    
    /// Stop capturing
    func stopCapture() async {
        try? await stream?.stopCapture()  // stopCapture() can throw
        stream = nil
        print("Screen capture stopped")
    }
}

// MARK: - SCStreamOutput callbacks
extension ScreenCaptureManager {
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        switch type {
        case .screen:
            // Video frame
            handleVideoFrame(sampleBuffer: sampleBuffer)
        case .audio:
            // Audio frame
            handleAudioFrame(sampleBuffer: sampleBuffer)
        @unknown default:
            break
        }
    }
    
    private func handleVideoFrame(sampleBuffer: CMSampleBuffer) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        // Process the frame...
        // e.g. wrap it in a CIImage (or NSImage) for further processing
        let ciImage = CIImage(cvPixelBuffer: pixelBuffer)
        _ = ciImage
    }
    
    private func handleAudioFrame(sampleBuffer: CMSampleBuffer) {
        // Process the audio...
        // e.g. walk the AudioBufferList for encoding or level metering
    }
    
    // SCStreamDelegate error callback
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        print("Capture stream stopped with error: \(error)")
    }
}

4.4 Window filtering and exclusion

ScreenCaptureKit gives fine-grained control over what gets captured:

swift

extension ScreenCaptureManager {
    /// Content filter that excludes our own window
    func createContentFilterExcludingSelf(display: SCDisplay, ownWindow: SCWindow?) -> SCContentFilter {
        var excludingWindows: [SCWindow] = []
        
        if let ownWindow = ownWindow {
            excludingWindows.append(ownWindow)
        }
        
        return SCContentFilter(display: display, excludingWindows: excludingWindows)
    }
    
    /// Capture only the windows of a given application.
    /// Note: SCContentFilter(desktopIndependentWindow:) takes an SCWindow,
    /// so per-application capture uses the including: initializer instead.
    func createContentFilterForApp(display: SCDisplay, application: SCRunningApplication) -> SCContentFilter {
        return SCContentFilter(display: display, including: [application], exceptingWindows: [])
    }
    
    /// Capture everything except the given applications.
    /// SCRunningApplication has no `windows` property; use the dedicated initializer.
    func createContentFilterExcludingApps(display: SCDisplay, excludingApps: [SCRunningApplication]) -> SCContentFilter {
        return SCContentFilter(display: display, excludingApplications: excludingApps, exceptingWindows: [])
    }
}

5. Screen Capture with AVFoundation

Although ScreenCaptureKit is the recommended approach, understanding AVFoundation's screen capture also helps in understanding the macOS media architecture.

5.1 AVCaptureScreenInput basics

swift

import AVFoundation

class AVScreenCapture: NSObject {
    private var captureSession: AVCaptureSession?
    private var screenInput: AVCaptureScreenInput?
    private var videoOutput: AVCaptureVideoDataOutput?
    
    func setupCapture(displayID: CGDirectDisplayID) throws {
        // Create the capture session
        let session = AVCaptureSession()
        captureSession = session
        
        // Create the screen input (failable initializer)
        guard let screenInput = AVCaptureScreenInput(displayID: displayID) else {
            throw NSError(domain: "AVScreenCapture", code: -1,
                          userInfo: [NSLocalizedDescriptionKey: "Failed to create screen input"])
        }
        self.screenInput = screenInput
        
        // Configure the input
        screenInput.minFrameDuration = CMTime(value: 1, timescale: 30)
        screenInput.capturesCursor = true
        screenInput.capturesMouseClicks = true
        
        // Attach the input
        if session.canAddInput(screenInput) {
            session.addInput(screenInput)
        }
        
        // Create the video output
        let output = AVCaptureVideoDataOutput()
        output.videoSettings = [
            kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA
        ]
        output.setSampleBufferDelegate(self, queue: DispatchQueue.global())
        videoOutput = output
        
        if session.canAddOutput(output) {
            session.addOutput(output)
        }
        
        // Start capturing
        session.startRunning()
    }
}

extension AVScreenCapture: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Process the frame
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer
    }
}

5.2 AVFoundation vs. ScreenCaptureKit

| Feature | AVCaptureScreenInput | ScreenCaptureKit |
|---|---|---|
| OS version | macOS 10.7+ | macOS 12.3+ |
| Audio capture | Must be implemented separately | Native support |
| Window filtering | Not supported | Supported |
| Privacy control | Coarse-grained | Fine-grained |
| HDR support | Limited | Full |
| Performance | Average | Excellent |

6. Audio Capture with the CoreAudio API

6.1 Enumerating microphone devices

Option 1: AVFoundation (recommended)
swift

import AVFoundation

/// List microphone devices via AVFoundation
func listMicrophonesAVFoundation() {
    // Note: devices(for:) is deprecated; on recent SDKs prefer AVCaptureDevice.DiscoverySession
    let audioDevices = AVCaptureDevice.devices(for: .audio)
    for device in audioDevices {
        print("🎤 Name: \(device.localizedName)")
        print("🆔 Unique ID: \(device.uniqueID)")
        print("📋 Model ID: \(device.modelID)")   // modelID is non-optional
        print("🔗 Connected: \(device.isConnected ? "yes" : "no")")
        print("--------------------------------")
    }
}

Notes

  • AVCaptureDevice.devices(for: .audio) returns an [AVCaptureDevice] array containing every audio input device (i.e. microphone) available on the system
  • On macOS 15 and later, ScreenCaptureKit can select which device to record from via the uniqueID
  • Assign the uniqueID to SCStreamConfiguration.microphoneCaptureDeviceID and ScreenCaptureKit will capture audio from that microphone
Option 2: CoreAudio (low-level)

CoreAudio is the low-level system audio API and can talk directly to the audio hardware:

swift

import CoreAudio

/// An audio device as seen by CoreAudio
public struct AudioDevice {
    public let deviceID: AudioDeviceID
    public let name: String
    public let isInput: Bool
    public let isOutput: Bool
    
    public init(deviceID: AudioDeviceID, name: String, isInput: Bool, isOutput: Bool) {
        self.deviceID = deviceID
        self.name = name
        self.isInput = isInput
        self.isOutput = isOutput
    }
}

class CoreAudioDeviceManager {
    
    /// Enumerate all available microphone (input) devices
    public func getAvailableMicrophoneDevices() -> [AudioDevice] {
        var devices: [AudioDevice] = []
        
        // Query the size of the device list
        var deviceListSize: UInt32 = 0
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDevices,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        var status = AudioObjectGetPropertyDataSize(
            AudioObjectID(kAudioObjectSystemObject),
            &propertyAddress,
            0,
            nil,
            &deviceListSize
        )
        
        guard status == noErr else {
            print("Failed to get device-list size: \(status)")
            return devices
        }
        
        let deviceCount = Int(deviceListSize) / MemoryLayout<AudioDeviceID>.size
        var deviceIDs = [AudioDeviceID](repeating: 0, count: deviceCount)
        
        status = AudioObjectGetPropertyData(
            AudioObjectID(kAudioObjectSystemObject),
            &propertyAddress,
            0,
            nil,
            &deviceListSize,
            &deviceIDs
        )
        
        guard status == noErr else {
            print("Failed to get device list: \(status)")
            return devices
        }
        
        // Inspect each device and keep the input-capable ones
        for deviceID in deviceIDs {
            if let device = getDeviceInfo(deviceID: deviceID) {
                if device.isInput {
                    devices.append(device)
                }
            }
        }
        
        return devices
    }
    
    /// Read a device's details
    private func getDeviceInfo(deviceID: AudioDeviceID) -> AudioDevice? {
        // Device name
        var nameSize: UInt32 = 0
        var namePropertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyDeviceNameCFString,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        var status = AudioObjectGetPropertyDataSize(
            deviceID,
            &namePropertyAddress,
            0,
            nil,
            &nameSize
        )
        
        guard status == noErr else { return nil }
        
        var deviceName: CFString?
        withUnsafeMutablePointer(to: &deviceName) { pointer in
            status = AudioObjectGetPropertyData(
                deviceID,
                &namePropertyAddress,
                0,
                nil,
                &nameSize,
                pointer
            )
        }
        
        guard status == noErr, let name = deviceName else { return nil }
        
        // Does the device have input and/or output streams?
        let isInput = hasStreams(deviceID: deviceID, scope: kAudioDevicePropertyScopeInput)
        let isOutput = hasStreams(deviceID: deviceID, scope: kAudioDevicePropertyScopeOutput)
        
        return AudioDevice(
            deviceID: deviceID,
            name: name as String,
            isInput: isInput,
            isOutput: isOutput
        )
    }
    
    /// Check whether a device has streams in the given direction
    private func hasStreams(deviceID: AudioDeviceID, scope: AudioObjectPropertyScope) -> Bool {
        var streamListSize: UInt32 = 0
        var streamPropertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioDevicePropertyStreams,
            mScope: scope,
            mElement: kAudioObjectPropertyElementMain
        )
        
        let status = AudioObjectGetPropertyDataSize(
            deviceID,
            &streamPropertyAddress,
            0,
            nil,
            &streamListSize
        )
        
        guard status == noErr else { return false }
        
        // A device supports this direction if it has at least one stream
        return streamListSize >= UInt32(MemoryLayout<AudioStreamID>.size)
    }
}

6.2 Listening for device changes

swift

extension CoreAudioDeviceManager {
    /// Register a listener for device-list changes
    func registerDeviceChangeListener(callback: @escaping () -> Void) {
        var propertyAddress = AudioObjectPropertyAddress(
            mSelector: kAudioHardwarePropertyDevices,
            mScope: kAudioObjectPropertyScopeGlobal,
            mElement: kAudioObjectPropertyElementMain
        )
        
        // AudioObjectAddPropertyListener takes a C function pointer, which cannot
        // capture `callback`; the block-based variant can, and it dispatches
        // onto the queue we pass in.
        AudioObjectAddPropertyListenerBlock(
            AudioObjectID(kAudioObjectSystemObject),
            &propertyAddress,
            DispatchQueue.main
        ) { _, _ in
            // The device list changed
            callback()
        }
    }
}

6.3 About OrayVirtualAudioDevice

When enumerating audio devices you may encounter "OrayVirtualAudioDevice", a virtual audio device. It is a driver installed by the Sunlogin (Oray) remote-control software for transporting audio in remote-desktop sessions.

What it does

  • With this virtual device installed, the controlled Mac can capture system sound even when it has no physical microphone
  • It registers a virtual CoreAudio device driver (an Audio HAL plug-in) with macOS, so the system appears to have a real microphone

Suggested handling: filter such virtual devices out by name when enumerating:

swift

let validMicrophones = devices.filter { device in
    !device.name.contains("Virtual") && 
    !device.name.contains("Oray")
}
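A blocklist makes that filter easier to extend and to test. The sketch below operates on plain device names so it stays self-contained; the function names are ours, and the marker list takes "Virtual" and "Oray" from the text plus two other widely used virtual drivers as our own additions:

```swift
import Foundation

/// Returns true when a device name looks like a virtual audio device.
/// Marker list is illustrative; extend it for the drivers you encounter.
func isVirtualAudioDevice(name: String,
                          markers: [String] = ["Virtual", "Oray", "BlackHole", "Soundflower"]) -> Bool {
    return markers.contains { name.localizedCaseInsensitiveContains($0) }
}

/// Keep only the names that look like physical microphones.
func physicalMicrophoneNames(_ names: [String]) -> [String] {
    return names.filter { !isVirtualAudioDevice(name: $0) }
}
```

In the enumeration code above, `devices.filter { !isVirtualAudioDevice(name: $0.name) }` replaces the inline predicate.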

7. Differences from Screen Recording on iOS

7.1 Framework comparison

| Platform | Framework | OS version | Main APIs |
|---|---|---|---|
| macOS | ScreenCaptureKit | macOS 12.3+ | SCStream, SCShareableContent |
| iOS | ReplayKit | iOS 9.0+ | RPScreenRecorder, RPBroadcastActivityViewController |

7.2 Architectural differences

macOS (ScreenCaptureKit)

swift

// fetch content → configure stream → add outputs → start capture
let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
let filter = SCContentFilter(display: content.displays[0], excludingWindows: [])
let stream = SCStream(filter: filter, configuration: config, delegate: self)
try stream.addStreamOutput(self, type: .screen, sampleHandlerQueue: queue)
try await stream.startCapture()

iOS (ReplayKit)

swift

// request permission → start recording → get preview → stop and process
RPScreenRecorder.shared().startRecording { error in
    // handle the error
}
// on stop, receive a preview controller
RPScreenRecorder.shared().stopRecording { previewController, error in
    // present the preview
}

7.3 Key differences

  1. Permission model

    • macOS: system-level permission, managed by the user in System Settings
    • iOS: in-app permission request; the user is prompted on first use
  2. Background recording

    • macOS: supported
    • iOS: heavily restricted; requires a Broadcast Extension
  3. Audio capture

    • macOS: system audio and microphone supported directly
    • iOS: requires explicit user authorization; system-audio capture is restricted
  4. Window filtering

    • macOS: specific windows can be excluded
    • iOS: no window-level filtering
  5. File writing

    • macOS: can write straight to a file
    • iOS: typically via AVAssetWriter or uploading to a server

7.4 Suggested cross-platform structure

swift

#if os(macOS)
import ScreenCaptureKit
#else
import ReplayKit
#endif

class CrossPlatformScreenRecorder {
    #if os(macOS)
    private var stream: SCStream?
    #else
    private let recorder = RPScreenRecorder.shared()
    #endif
    
    func startRecording() async throws {
        #if os(macOS)
        // macOS implementation
        let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        // ...
        #else
        // iOS implementation
        try await recorder.startRecording()
        #endif
    }
}

8. Lessons from Real-World Use

8.1 Common problems and fixes

Problem 1: permission granted, but capture still fails
swift

// Wrong: start capturing immediately after the permission check
if await checkPermission() {
    startCapture() // may fail!
}

// Better: give the system a moment to propagate the new permission state
if await checkPermission() {
    try await Task.sleep(nanoseconds: 100_000_000) // 100 ms
    startCapture()
}

Problem 2: memory keeps growing
swift

// Problematic: sample buffers accumulate in the autorelease pool
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    processFrame(sampleBuffer) // nothing gets drained
}

// Fix: wrap the work in an autoreleasepool
func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
    autoreleasepool {
        processFrame(sampleBuffer)
    }
}

Problem 3: poor performance at high resolutions
swift

// Problem: capturing a 4K display at native size is expensive
config.width = 3840
config.height = 2160

// Fix: scale down as needed (integer division; display.width is an Int)
let scaleFactor = 2
config.width = display.width / scaleFactor
config.height = display.height / scaleFactor

Problem 4: audio latency
swift

// Reduce audio latency
config.queueDepth = 3  // smaller queue
config.minimumFrameInterval = CMTime(value: 1, timescale: 60) // higher frame rate, lower latency

8.2 Debugging tips

swift

// Log stream errors in detail (SCStreamDelegate callback)
extension ScreenCaptureManager {
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        if let scError = error as? SCStreamError {
            switch scError.code {
            case .userDeclined:
                print("User declined authorization")
            case .noCaptureSource:
                print("No capture source available")
            default:
                print("Other error: \(scError.code) - \(scError.localizedDescription)")
            }
        }
    }
}

9. Performance Tuning

9.1 Video capture

swift

func createOptimizedConfiguration(for display: SCDisplay) -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    
    // Scale according to the display's resolution
    let targetWidth: Int
    let targetHeight: Int
    
    if display.width > 2560 {
        // 4K-class display: scale down to ~1080p, preserving aspect ratio
        let aspect = CGFloat(display.height) / CGFloat(display.width)
        targetWidth = 1920
        targetHeight = Int(1920 * aspect)
    } else {
        targetWidth = display.width
        targetHeight = display.height
    }
    
    config.width = targetWidth
    config.height = targetHeight
    
    // Frame rate
    config.minimumFrameInterval = CMTime(value: 1, timescale: 30)
    
    // Queue depth
    config.queueDepth = 5
    
    // Color space
    if #available(macOS 13.0, *) {
        config.colorSpaceName = CGColorSpace.sRGB
    }
    
    return config
}
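The scaling branch above can be factored into a pure helper so it is testable in isolation. The 2560/1920 numbers follow the snippet; the even-rounding is our addition (video encoders generally want even dimensions), and the function name is illustrative:

```swift
/// Compute the capture size for a display, downscaling 4K-class displays
/// to ~1080p and rounding both dimensions down to even numbers.
func captureSize(displayWidth: Int, displayHeight: Int,
                 widthThreshold: Int = 2560, targetWidth: Int = 1920) -> (width: Int, height: Int) {
    var w = displayWidth
    var h = displayHeight
    if displayWidth > widthThreshold {
        // Preserve the aspect ratio while capping the width
        let aspect = Double(displayHeight) / Double(displayWidth)
        w = targetWidth
        h = Int(Double(targetWidth) * aspect)
    }
    // Round down to even values for encoder friendliness
    w -= w % 2
    h -= h % 2
    return (w, h)
}
```

`createOptimizedConfiguration` would then just assign `config.width`/`config.height` from the tuple.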

9.2 音频捕获优化

swift 复制代码
// 音频配置优化
extension SCStreamConfiguration {
    func configureAudio(sampleRate: Double = 48000, channels: Int = 2) {
        if #available(macOS 14.0, *) {
            self.capturesAudio = true
            self.sampleRate = sampleRate
            self.channelCount = channels
            
            // 排除系统静音音效
            self.excludesCurrentProcessAudioFromCapture = true
        }
    }
}

9.3 Memory management

swift

class FrameProcessor {
    private let processingQueue = DispatchQueue(label: "com.app.frameProcessor", qos: .userInitiated)
    private var frameCount = 0
    private let maxFrameBuffer = 30
    
    func processFrame(_ sampleBuffer: CMSampleBuffer) {
        processingQueue.async { [weak self] in
            autoreleasepool {
                guard let self = self else { return }
                
                // Cap the number of in-flight frames
                self.frameCount += 1
                defer { self.frameCount -= 1 }  // always release the slot, even when dropping
                if self.frameCount > self.maxFrameBuffer {
                    // Drop the frame
                    return
                }
                
                // Process the frame
                self.doProcessFrame(sampleBuffer)
            }
        }
    }
    
    private func doProcessFrame(_ sampleBuffer: CMSampleBuffer) {
        // encode / forward the frame...
    }
}
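The in-flight counter above is easy to get wrong (the release step must also run on the drop path). Isolating it in a small value type makes the invariant testable; `FrameBudget` is an illustrative name, not an API:

```swift
/// Tracks how many frames are currently in flight and
/// rejects new ones once the budget is exhausted.
struct FrameBudget {
    let limit: Int
    private(set) var inFlight = 0

    init(limit: Int) {
        self.limit = limit
    }

    /// Try to claim a slot; returns false when the frame should be dropped.
    mutating func acquire() -> Bool {
        guard inFlight < limit else { return false }
        inFlight += 1
        return true
    }

    /// Release a previously acquired slot.
    mutating func release() {
        precondition(inFlight > 0, "release() without matching acquire()")
        inFlight -= 1
    }
}
```

In `processFrame`, call `acquire()` before doing any work and `release()` in a `defer`, so the slot is freed on every path.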

9.4 Multi-display capture

swift

class MultiDisplayCapture: NSObject, SCStreamOutput, SCStreamDelegate {
    private var streams: [SCStream] = []
    
    func startCaptureAllDisplays(displays: [SCDisplay], configuration: SCStreamConfiguration) async throws {
        for display in displays {
            let stream = SCStream(
                filter: SCContentFilter(display: display, excludingWindows: []),
                configuration: configuration,
                delegate: self
            )
            try stream.addStreamOutput(self, type: .screen, sampleHandlerQueue: .global())
            try await stream.startCapture()
            streams.append(stream)
        }
    }
    
    func stopAllCaptures() async {
        for stream in streams {
            try? await stream.stopCapture()
        }
        streams.removeAll()
    }
    
    // Minimal conformances; real code would route frames per display
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {}
    func stream(_ stream: SCStream, didStopWithError error: Error) {}
}

10. A Complete Example

swift

import ScreenCaptureKit
import AVFoundation
import CoreAudio

/// A complete macOS screen-recorder example
class MacScreenRecorder: NSObject, SCStreamOutput, SCStreamDelegate {
    
    // MARK: - Properties
    private var stream: SCStream?
    private var isRecording = false
    
    // MARK: - Public Methods
    
    /// Start recording
    func startRecording() async throws {
        // 1. Check permissions
        guard await checkPermissions() else {
            throw RecorderError.permissionDenied
        }
        
        // 2. Fetch the shareable content
        let content = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        
        guard let display = content.displays.first else {
            throw RecorderError.noDisplay
        }
        
        // 3. Build the configuration
        let config = createConfiguration(for: display)
        
        // 4. Build the filter (nothing excluded here)
        let filter = SCContentFilter(display: display, excludingWindows: [])
        
        // 5. Create and start the stream
        stream = SCStream(filter: filter, configuration: config, delegate: self)
        
        try stream?.addStreamOutput(self, type: .screen, sampleHandlerQueue: .global())
        
        if #available(macOS 13.0, *) {
            try stream?.addStreamOutput(self, type: .audio, sampleHandlerQueue: .global())
        }
        
        try await stream?.startCapture()
        isRecording = true
        
        print("✅ Recording started")
    }
    
    /// Stop recording
    func stopRecording() async {
        try? await stream?.stopCapture()
        stream = nil
        isRecording = false
        print("⏹️ Recording stopped")
    }
    
    // MARK: - Private Methods
    
    private func checkPermissions() async -> Bool {
        // Screen-recording permission
        do {
            _ = try await SCShareableContent.excludingDesktopWindows(false, onScreenWindowsOnly: true)
        } catch {
            print("❌ Screen-recording permission not granted")
            return false
        }
        
        // Microphone permission (if audio is needed)
        let micStatus = AVCaptureDevice.authorizationStatus(for: .audio)
        if micStatus != .authorized {
            print("⚠️ Microphone permission not granted")
        }
        
        return true
    }
    
    private func createConfiguration(for display: SCDisplay) -> SCStreamConfiguration {
        let config = SCStreamConfiguration()
        
        // Video
        config.width = display.width
        config.height = display.height
        config.minimumFrameInterval = CMTime(value: 1, timescale: 30)
        config.queueDepth = 5
        
        // Audio
        if #available(macOS 13.0, *) {
            config.capturesAudio = true
            config.sampleRate = 48000
            config.channelCount = 2
        }
        
        return config
    }
    
    // MARK: - SCStreamOutput
    
    func stream(_ stream: SCStream, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, of type: SCStreamOutputType) {
        autoreleasepool {
            switch type {
            case .screen:
                handleVideoSampleBuffer(sampleBuffer)
            case .audio:
                handleAudioSampleBuffer(sampleBuffer)
            @unknown default:
                break
            }
        }
    }
    
    private func handleVideoSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }
        _ = pixelBuffer
        
        // From here you could:
        // 1. Encode to H.264/H.265 and write to a file
        // 2. Push the stream to a server
        // 3. Run image processing
        
        // Example: inspect the presentation timestamp
        let timestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        _ = timestamp
        // print("video frame at \(timestamp.seconds)")
    }
    
    private func handleAudioSampleBuffer(_ sampleBuffer: CMSampleBuffer) {
        // Process the audio data, e.g. walk the buffers via
        // sampleBuffer.withAudioBufferList { ... } for encoding or metering
    }
    
    // MARK: - SCStreamDelegate
    
    func stream(_ stream: SCStream, didStopWithError error: Error) {
        print("❌ Recording error: \(error.localizedDescription)")
        isRecording = false
    }
}

// MARK: - Error Types

enum RecorderError: Error {
    case permissionDenied
    case noDisplay
    case captureFailed(Error)
}

11. Summary

Compared with Windows, screen-recording development on macOS has its own characteristics:

  1. Strict permission model: usage must be declared in Info.plist, and the user can revoke access at any time
  2. ScreenCaptureKit first: Apple's recommended framework, full-featured and fast
  3. Separate audio capture: microphone devices can be enumerated via AVFoundation or CoreAudio
  4. Large divergence from iOS: different frameworks and permission models; adapt per platform

Practical advice:

  • Sort out permissions before starting capture
  • Manage memory with autoreleasepool
  • Scale the capture size to the display's resolution
  • Tune the queue depth to balance latency against stability
