[Hands-On] Vue3 + WebRTC + SIP + AI: Building a Fully Automated Voice Emergency-Call System (Remote Stream Capture + Real-Time ASR + TTS Playback)

Hi everyone, I'm a front-end engineer.

Today I'm sharing an enterprise-grade, production-ready tutorial: building a fully automated AI voice emergency-call system from scratch with Vue3 + TwSIP + WebRTC + AI.

This article covers the whole chain:
Place a SIP call from the browser → capture the remote audio stream via WebRTC → stream it to AI for real-time transcription → AI processing → synthesize a voice reply and play it back to the caller

Full principles + code + solutions to the hard parts, all of it production-ready.


Preface

In emergency dispatch, police call-taking, intelligent customer service, and hotline systems, fully automated AI voice interaction has become a baseline capability.

But front-end developers keep running into the same questions:

  • How does a web page make a phone call?
  • How do you get the other party's audio stream out of WebRTC?
  • How do you stream that audio to an AI for transcription in real time?
  • How do you play the AI's voice back into the call?
  • How do you deal with latency, noise, disconnects, and browser restrictions?

This article answers all of them in one place.


1. Overall Business Flow (Closed AI Call-Handling Loop)

Caller dials in → call is connected
  ↓
Caller speaks → remote WebRTC audio stream
  ↓
Real-time capture → convert to 16 kHz PCM → stream to AI ASR → text
  ↓
LLM understands the incident, extracts address/event/severity
  ↓
Generate a reply → TTS speech synthesis
  ↓
Replace the WebRTC track → AI voice plays back to the caller
  ↓
One fully automated AI call-handling cycle is complete
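The loop can be modeled as a small state machine. The stage names below are my own shorthand for the arrows above, not SDK concepts:

```typescript
// Hypothetical stage names for the AI call-handling loop described above.
type Stage =
  | 'idle'        // waiting for a call
  | 'inCall'      // WebRTC media established
  | 'listening'   // streaming remote PCM to ASR
  | 'thinking'    // LLM extracting intent / drafting a reply
  | 'speaking';   // TTS audio pushed back via replaceTrack

// Allowed transitions, matching the arrows in the diagram.
const transitions: Record<Stage, Stage[]> = {
  idle: ['inCall'],
  inCall: ['listening', 'idle'],
  listening: ['thinking', 'idle'],
  thinking: ['speaking', 'idle'],
  speaking: ['listening', 'idle'] // back to listening for the next utterance
};

function canTransition(from: Stage, to: Stage): boolean {
  return transitions[from].includes(to);
}
```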

This article is built on the following stack:

  • Vue3 + TS + Composition API
  • TwSIP (天维尔 SIP voice SDK)
  • WebRTC (real-time audio/video)
  • Web Audio API (audio capture and processing)
  • Streaming ASR (real-time speech-to-text)
  • LLM (intent understanding)
  • Streaming TTS (text-to-speech)

2. Step 1: Place a SIP Call (Establish the WebRTC Session)

1. Core hook: useTwSip

First, initialize the SIP configuration:

```ts
const [sipState, sipMethods] = useTwSip({
  account: '<agent extension>',
  pass: '<password>',
  auto: true,
  localVideoRef: localVideoRef,
  remoteVideoRef: remoteVideoRef
})
```

2. Make a call

```ts
// Audio call
sipMethods.makeCall('<target number>', 'audio')

// Video call
// sipMethods.makeCall('<target number>', 'video')
```

3. What happens under the hood?

  1. SIP signaling is sent to the server
  2. The server forwards and routes it
  3. A WebRTC PeerConnection is established automatically
  4. The media channel opens → audio starts flowing

Once the call is established, there are always two streams:

  • localStream: the local microphone
  • remoteStream: the other party's audio (the caller's voice)

3. Step 2: How Do You Get the Remote Audio Stream from WebRTC? (Core)

This is where 90% of newcomers get stuck.

1. The standard mechanism: the ontrack event

In every WebRTC call, the other party's audio always arrives here:

```ts
peerConnection.ontrack = (event) => {
  // ✅ This is the caller's live audio stream
  const remoteStream = event.streams[0]

  // It can be played directly
  const audio = document.createElement('audio')
  audio.srcObject = remoteStream
  audio.play().catch(console.error) // play() can reject under autoplay policies
}
```

2. How our hook gets it

In useTwSip.ts:

```ts
sessionStateChange: async ({ state, callId }) => {
  if (state === 'Established') {
    const session = twSip.value?.callList.find(c => c.id === callId)
    if (session) {
      // ✅ The remote stream is available here
      remoteStream.value = session.remoteMedia?.stream
      localStream.value = session.localMedia?.stream
    }
    callStatus.value = 'inCall'
  }
}
```

remoteStream.value now holds the caller's live voice.


4. Step 3: Remote Stream → Real-Time PCM → AI Transcription (The Hard Part)

Why is this hard?

  • WebRTC media is encrypted (SRTP) and compressed
  • You cannot read its raw bytes directly
  • The AI only accepts 16000 Hz, mono, 16-bit PCM

Solution: the Web Audio API

We use useMediaRecorder.ts to capture the remote stream in real time.
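One caveat before the capture code: WebRTC audio is usually decoded at 48 kHz. Creating the AudioContext at 16000 Hz lets the browser resample for you, but if you ever need to do it manually, a naive sketch looks like this (my own helper; production code would apply a low-pass filter first to avoid aliasing):

```typescript
// Naive linear-interpolation resampler, e.g. 48 kHz -> 16 kHz.
function resampleFloat32(
  input: Float32Array,
  fromRate: number,
  toRate: number
): Float32Array {
  if (fromRate === toRate) return input;
  const ratio = fromRate / toRate;
  const outLength = Math.floor(input.length / ratio);
  const out = new Float32Array(outLength);
  for (let i = 0; i < outLength; i++) {
    const pos = i * ratio;
    const left = Math.floor(pos);
    const right = Math.min(left + 1, input.length - 1);
    const frac = pos - left;
    // interpolate between the two nearest source samples
    out[i] = input[left] * (1 - frac) + input[right] * frac;
  }
  return out;
}
```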

1. Initialize the recorder

```ts
const [recorderState, recorderMethods] = useMediaRecorder('audio', {
  sampleRate: 16000, // the standard ASR sample rate
  bufferSize: 2048
})
```

2. Bind the remote stream

```ts
recorderMethods.setStream(sipState.remoteStream)
```

3. Start the real-time PCM output (wired to ASR)

```ts
recorderMethods.startRealTimePCMStream((pcmData: Int16Array) => {
  // ✅ 16 kHz, mono, 16-bit PCM arrives here
  // Send it straight to the speech-recognition service
  ASR.send(pcmData)
})
```

4. How the capture works under the hood (simplified)

```ts
// 1. Create the audio context (the browser resamples to 16 kHz for us)
const audioContext = new AudioContext({ sampleRate: 16000 })

// 2. Plug the remote stream in
const source = audioContext.createMediaStreamSource(remoteStream)

// 3. Read audio data in real time
// (ScriptProcessorNode is deprecated; AudioWorklet is the modern replacement)
const processor = audioContext.createScriptProcessor(4096, 1, 1)
source.connect(processor)
processor.connect(audioContext.destination) // without this, onaudioprocess never fires in some browsers
processor.onaudioprocess = (e) => {
  const data = e.inputBuffer.getChannelData(0)
  // Convert to PCM
  const pcm = float32To16BitPCM(data)
  // Send to the AI
}
```
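float32To16BitPCM is left undefined in the snippet above; a minimal implementation (assuming input samples in [-1, 1]) might look like:

```typescript
// Convert Web Audio Float32 samples ([-1, 1]) to 16-bit signed PCM.
function float32To16BitPCM(input: Float32Array): Int16Array {
  const out = new Int16Array(input.length);
  for (let i = 0; i < input.length; i++) {
    const s = Math.max(-1, Math.min(1, input[i])); // clamp to avoid overflow
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;      // map to -32768..32767
  }
  return out;
}
```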

5. Step 4: The AI Pipeline (ASR → LLM → TTS)

1. Streaming ASR: speech → text

Real-time PCM → Alibaba Cloud / Baidu / iFLYTEK ASR → text streamed back

Example:
Input:  "My house is on fire, Building 3 of XX Estate"
Output: "My house is on fire, Building 3 of XX Estate"
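Streaming ASR vendors generally want small fixed-size chunks rather than arbitrary buffers. A framing helper might look like this (the 640-sample / 40 ms frame size is an assumption; check your provider's docs):

```typescript
// Split a PCM buffer into fixed-size frames for streaming ASR.
// At 16000 Hz, 40 ms = 640 samples.
function framePCM(pcm: Int16Array, frameSize = 640): Int16Array[] {
  const frames: Int16Array[] = [];
  for (let offset = 0; offset + frameSize <= pcm.length; offset += frameSize) {
    frames.push(pcm.subarray(offset, offset + frameSize));
  }
  // A trailing partial frame is dropped here; real code would
  // buffer it and prepend it to the next callback's data.
  return frames;
}
```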

2. LLM processing: incident extraction + reply generation

Input: the transcribed report
Output:
- Incident type: fire
- Address: Building 3, XX Estate
- Reply: Your report has been received; please keep your phone line open
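Extraction is easiest to wire up if the model is prompted to answer in JSON. A hedged parsing sketch (the field names are my own convention, not any API's):

```typescript
// Parsed incident fields. Assumes the LLM is prompted to reply with JSON
// like {"type": "...", "address": "...", "reply": "..."}.
interface Incident {
  type: string;
  address: string;
  reply: string;
}

function parseIncident(llmOutput: string): Incident | null {
  try {
    const data = JSON.parse(llmOutput);
    if (
      typeof data.type === 'string' &&
      typeof data.address === 'string' &&
      typeof data.reply === 'string'
    ) {
      return data as Incident;
    }
    return null; // well-formed JSON but missing required fields
  } catch {
    return null; // model returned non-JSON text
  }
}
```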

3. Streaming TTS: text → audio

Reply text → TTS → PCM audio
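Going the other way, TTS services typically return 16-bit PCM, which has to be converted back to Float32 before it can be written into a Web Audio AudioBuffer:

```typescript
// Convert 16-bit PCM from the TTS service back to Float32 samples.
function int16ToFloat32(input: Int16Array): Float32Array {
  const out = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) {
    // divide by 0x8000 so that -32768 maps exactly to -1.0
    out[i] = input[i] / 0x8000;
  }
  return out;
}
```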

6. Step 5: Play the AI Voice Back to the Caller (The Hardest Part)

The problem:

How do you feed the AI-synthesized audio into the current SIP call?

The canonical answer:

Replace the WebRTC sender track.

Our hook exposes:

```ts
// PCM is assumed to be 16 kHz mono
sipMethods.pushAiPcmToLocalStream(aiPcmData)
```

Under the hood (core only)

```ts
// 1. AI PCM → media stream
const stream = createStreamFromPCM(pcmData)

// 2. Get the PeerConnection
const pc = call.sessionDescriptionHandler.peerConnection

// 3. Find the audio sender
const sender = pc.getSenders().find(s => s.track?.kind === 'audio')

// 4. Replace the track → the caller hears the AI immediately
sender.replaceTrack(stream.getAudioTracks()[0])
```
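createStreamFromPCM is an internal helper of the SDK wrapper. When debugging what is about to be pushed into the call, it can help to wrap the raw PCM in a minimal WAV header and listen to it as a file. A sketch (16-bit mono assumed; pcmToWav is my own helper, not part of TwSIP):

```typescript
// Build a minimal 44-byte WAV header + data for 16-bit mono PCM.
function pcmToWav(pcm: Int16Array, sampleRate = 16000): Uint8Array {
  const dataSize = pcm.length * 2;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);
  const writeStr = (offset: number, s: string) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };
  writeStr(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);
  writeStr(8, 'WAVE');
  writeStr(12, 'fmt ');
  view.setUint32(16, 16, true);             // fmt chunk size
  view.setUint16(20, 1, true);              // PCM format
  view.setUint16(22, 1, true);              // mono
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * 2, true); // byte rate
  view.setUint16(32, 2, true);              // block align
  view.setUint16(34, 16, true);             // bits per sample
  writeStr(36, 'data');
  view.setUint32(40, dataSize, true);
  new Int16Array(buffer, 44).set(pcm);
  return new Uint8Array(buffer);
}
```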

The result:

The caller hears the AI voice instead of your microphone.


7. Every Hard Problem in the Pipeline + Production-Grade Solutions (Must Read)

Problem 1: WebRTC won't let you read the remote stream directly

Solution: capture it with the Web Audio API

Problem 2: the AI only accepts 16 kHz PCM

Solution: real-time resampling + format conversion

Problem 3: high speech-recognition latency

Solution: streaming ASR + chunked uploads

Problem 4: getting the AI voice into the call

Solution: WebRTC replaceTrack

Problem 5: the microphone and the AI voice conflict

Solution: mute the microphone and push only the AI audio

Problem 6: browsers require HTTPS

Solution: deploy over HTTPS

Problem 7: memory leaks and crashes after hangup

Solution: automatically stop tracks, destroy streams, and close the audio context
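The teardown rule in Problem 7 can be made systematic with a small cleanup registry (a sketch of the pattern, not SDK code): register each resource's cleanup when you create it, then run them all on hangup.

```typescript
type Cleanup = () => void;

function createCleanupBag() {
  const tasks: Cleanup[] = [];
  return {
    add(task: Cleanup) {
      tasks.push(task);
    },
    // Run in reverse creation order; swallow individual failures so one
    // broken teardown cannot leak the remaining resources.
    dispose() {
      while (tasks.length) {
        try {
          tasks.pop()!();
        } catch (e) {
          console.error('cleanup failed:', e);
        }
      }
    }
  };
}
```

Typical usage: `bag.add(() => track.stop())` and `bag.add(() => audioContext.close())` at creation time, then a single `bag.dispose()` in the hangup handler.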

Problem 8: noise, audio dropouts, high latency

Solution: fixed 16 kHz mono, echo suppression, optimized resampling


8. Complete Hook Usage Example (Copy-Paste Ready)

1. Initialize the call

```ts
const localVideoRef = ref(null)
const remoteVideoRef = ref(null)

const [sipState, sipMethods] = useTwSip({
  account: '5013',
  localVideoRef,
  remoteVideoRef
})

const [recorder, recorderMethods] = useMediaRecorder('audio', {
  sampleRate: 16000
})
```

2. Dial + start capture

```ts
const call = async (num: string) => {
  await sipMethods.login()
  sipMethods.makeCall(num, 'audio')

  // Wait for the call to connect
  // (simplified: a fixed delay; watching callStatus is more robust)
  setTimeout(() => {
    // Bind the remote stream
    recorderMethods.setStream(sipState.remoteStream)
    // Start pushing real-time PCM to the AI
    recorderMethods.startRealTimePCMStream((pcm) => {
      // send to ASR
    })
  }, 2000)
}
```
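The fixed 2-second setTimeout above is fragile. A small polling helper (my own sketch, not part of the SDK) lets the recorder attach as soon as the call state actually connects:

```typescript
// Poll until a predicate becomes true, instead of a fixed setTimeout.
function waitFor(
  predicate: () => boolean,
  timeoutMs = 10000,
  intervalMs = 100
): Promise<void> {
  return new Promise((resolve, reject) => {
    const start = Date.now();
    const tick = () => {
      if (predicate()) return resolve();
      if (Date.now() - start > timeoutMs)
        return reject(new Error('waitFor timed out'));
      setTimeout(tick, intervalMs);
    };
    tick();
  });
}
```

With the hook above, this would look something like `await waitFor(() => sipState.callStatus.value === 'inCall' && !!sipState.remoteStream.value)` before calling `setStream`.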

3. Play the AI voice back

```ts
// Once the AI returns PCM data (assumed 16 kHz mono)
sipMethods.pushAiPcmToLocalStream(pcmData)
```

4. Hang up

```ts
const hangup = async () => {
  await sipMethods.hangup()
  recorderMethods.stopRealTimePCMStream()
  recorderMethods.resetRecording()
}
```

9. What the System Delivers

  • ✅ Make/answer SIP calls directly in the browser
  • ✅ Live transcription of the caller's speech
  • ✅ Automatic AI replies and voice playback
  • ✅ Automatic extraction of incident address and severity
  • ✅ Low latency, clean audio, stable operation
  • ✅ Audio and video calls
  • ✅ Conferencing, transfer, and hold

10. Summary (The 3 Sentences That Matter)

  1. WebRTC hands you the remote audio stream via ontrack
  2. Web Audio converts it to 16 kHz PCM and streams it to the AI
  3. replaceTrack pushes the AI voice back into the call, closing the loop

This setup has been running stably in production and can be applied directly to:

  • AI emergency call-taking
  • Intelligent customer-service hotlines
  • Emergency command intercom
  • Automated voice notifications
  • Outbound voice bots

Possible extensions

  • Multi-party video conferencing
  • Call recording
  • Screen sharing
  • Automatic work-order generation
  • Incident dashboards
  • Dialect recognition
  • Emotion detection

If this article helped you, please like, bookmark, and follow~

I'll keep publishing hands-on posts on WebRTC, Vue3, AI voice, SIP, and low-latency communication!

The full source code follows.

  • useTwSip.ts
import { ref, onMounted, onUnmounted } from 'vue';
import TwSip, {
  type eventNameType,
  type callListObjType,
  type loginInfoType,
  type loginOptionType
} from 'tw-sip';
import { closePage, openPage, updatePage } from '@/utils';
import useStorageEvent, { sendStorageEvent } from './useStorageEvent';
import { callRecordAdd } from '@/api/warning/videoFusionAPI';
import {
  RecordingMethods,
  UseMediaRecorderReturn,
  useMediaRecorder
} from './useMediaRecorder';
import { message } from 'ant-design-vue';
import { getPageConfig } from '@/utils/pageConfig';
// Added alongside the existing type definitions
export type RecordingState = 'inactive' | 'recording' | 'paused';

export interface RecordingInfo {
  state: RecordingState;
  startTime: number | null;
  recordedChunks: Blob[];
  mimeType: string;
  duration: number;
}

export interface RecordingOptions {
  mimeType?: string;
  audioBitsPerSecond?: number;
  videoBitsPerSecond?: number;
  bitsPerSecond?: number;
}
export type options = {
  account?: string;
  pass?: string;
  login?: loginInfoType;
  auto?: boolean;
  isMeeting?: boolean;
  mediaConfig?: MediaConfig;
  sipConfig?: SipConfig;
  remoteVideoRef?: any;
  localVideoRef?: any;
  [key: string]: any;
};
// Type definitions

type CallStatus = 'idle' | 'ringing' | 'inCall' | 'onHold' | 'callOut';
type CallStatusText = '空闲' | '振铃' | '呼叫' | '保持';
type LoginStatus = 'loggedOut' | 'loggingIn' | 'loggedIn' | 'error';
export enum CallStatusEnum {
  waiting = 'waiting',
  callIng = 'callIng',
  holdIng = 'holdIng'
}
interface MediaConfig {
  remoteVideoContainer?: string;
  localVideoContainer?: string;
  remoteAudioContainer?: string;
  [key: string]: any;
}

interface SipConfig {
  autoAppendMedia?: boolean;
  queueIds?: string[];
  serviceUrl?: string;
  [key: string]: any;
}
export interface conferencesObj {
  conferenceId: string;
  conferenceName: string;
  active: boolean;
  conferenceCategory: 'default' | 'sd';
}
export type addCall = {
  callerName: string;
  calleeName: string;
  calleeNumber: string;
  calleeOrgId: string;
  calleeOrgName: string;
  eventType: number;
  [key: string]: any;
};
export type TwSipState = {
  statusMapText: CallStatusText;
  twSipOptions: options;
  twSip: Ref<InstanceType<typeof TwSip> | undefined>;
  loginStatus: Ref<LoginStatus>;
  callStatus: Ref<CallStatus>;
  currentCall: Ref<callListObjType | undefined>;
  incomingCall: Ref<
    | {
        callId: string;
        from?: string;
        type?: 'audio' | 'video';
        currentSession?: callListObjType;
      }
    | undefined
  >;
  conferences: Ref<Array<conferencesObj>>;
  currentConferences: Ref<Partial<conferencesObj>>;
  queues: Ref<
    Array<{
      queueId: string;
      queueName: string;
    }>
  >;
  callRecords: Ref<
    Array<{
      callId: string;
      createTime: string;
      phone: string;
      mediaType: 'audio' | 'video';
      type: 'caller' | 'called';
      state: 1 | 2;
      connectTime?: string;
      hangUpTime?: string;
      showBadge: boolean;
    }>
  >;
  fileStream?: Ref<MediaStream | undefined>; // file-based media stream
  localStream: Ref<MediaStream | undefined>;
  remoteStream: Ref<MediaStream | undefined>;
  // audioLocalRecorder: UseMediaRecorderReturn;
  // videoLocalRecorder: UseMediaRecorderReturn;
  // videoRemoteRecorder: UseMediaRecorderReturn;
  // audioRemoteRecorder: UseMediaRecorderReturn;
};

export interface TwSipMethods {
  init: (mediaConfig?: MediaConfig, sipConfig?: SipConfig) => void;
  login: (
    account?: loginInfoType,
    optionsLogin?: loginOptionType
  ) => Promise<{ code: number; info: any; msg: string } | undefined>;
  logout: () => void;
  makeCall: (number: string, type?: 'audio' | 'video', data?: any) => void;
  answerCall: (type?: 'audio' | 'video') => Promise<void>;
  hangup: (callId?: string) => Promise<void>;
  sendDTMF: (msg: string, callId?: string) => void;
  toggleHold: (callId?: string) => Promise<void>;
  blindTransfer: (targetNumber: string, callId?: string) => void;
  attendedTransfer: (targetSession: any, callId?: string) => void;
  monitorCall: (monitorObj: {
    listenerExtNumber: string;
    monitorMode: number;
    monitoredExtNumber: string;
  }) => Promise<any>;
  forceHangup: (uuid: string) => Promise<any>;
  fetchQueues: () => Promise<Array<{ queueId: string; queueName: string }>>;
  createConference: (
    name: string,
    category?: 'default' | 'sd',
    data?: any
  ) => Promise<{ conferenceId: string; conferenceName: string } | undefined>;
  inviteToConference: (
    conferenceId: string,
    extIds: string[],
    moderator: string,
    data?: any
  ) => Promise<any>;

  fetchConferences: (
    name?: string,
    isDynamic?: boolean,
    type?: 'default' | 'sd'
  ) => Promise<any>;
  endConference: (conferenceId: string) => Promise<any>;
  deleteConference: (conferenceId: string) => Promise<any>;
  callRecordAddApi: (data: addCall) => Promise<any>;
  replaceMediaWithFile: (callId?: string) => Promise<void>;
  switchVideo: (IsFile: any) => Promise<void>;
  replaceTrackSafe: (
    newStream: MediaStream,
    callId: string,
    IsFile: boolean
  ) => Promise<void>;
  pushAiPcmToLocalStream: (
    pcmData: ArrayBuffer,
    callId?: string
  ) => Promise<void>;
  replaceStreamWithFile?: (
    file: File,
    replaceLocal: boolean,
    replaceRemote: boolean
  ) => Promise<void>;
  disableLocalVideo: (disable: boolean, callId?: string | undefined) => void;
}
export type { MediaConfig, SipConfig, CallStatus };
export type audioRecorder = UseMediaRecorderReturn;
export type videoRecorder = UseMediaRecorderReturn;
export function useTwSip(options?: options): [TwSipState, TwSipMethods] {
  const loginData = {
    account: '', // extension number
    pass: '', // password
    ip: '', // server IP
    server: '' // WSS server address
  };
  const [mapOrgType, setMapOrgType] = useStorageEvent('mapOrgType'); // cross-tab messaging; remove the related calls if unused
  const twSipOptions: options = reactive({
    remoteVideoContainer: '#remoteVideo',
    localVideoContainer: '#localVideo',
    remoteAudioContainer: '#remoteAudioBox',
    login: {
      ip: loginData.ip,
      server: loginData.server,
      authorizationUsername: options?.account || loginData.account || '',
      authorizationPassword: options?.pass || loginData.pass || '',
      aor: `sip:${options?.account || loginData.account}@${loginData.ip}`
    },
    fileStream: null,
    stateCall: '',
    reconnect: 3,
    auto: true,
    ...options
  });

  console.log(twSipOptions.login, '  twSipOptions.login');

  // Reactive state
  const twSip = ref<InstanceType<typeof TwSip>>();
  const loginStatus = ref<LoginStatus>('loggedOut');
  const localStream = ref<MediaStream | undefined>();

  const fileStream = ref<MediaStream | undefined>();
  const remoteStream = ref<MediaStream | undefined>();
  const callStatus = ref<CallStatus>('idle');
  const currentCall = ref<callListObjType>();
  const currentConferences = ref<Partial<conferencesObj>>({});
  const incomingCall = ref<{
    callId: string;
    from?: string;
    type?: 'audio' | 'video';
    otherSession?: any;
    currentSession?: any;
  }>();
  const conferences = ref<any[]>([]);
  const queues = ref<any[]>([]);
  const callRecords = ref<any[]>([]);
  const statusMap: any = {
    1: 'ringing',
    2: 'inCall',
    3: 'onHold'
  };
  const statusMapText: any = {
    idle: '空闲',
    ringing: '振铃',
    inCall: '通话中',
    onHold: '保持'
  };
  const statusArr = ['idle', 'ringing', 'inCall', 'onHold'];
  // Timer refs used for refreshing the image stream
  let imageCanvas: HTMLCanvasElement | null = null;
  let videoRef: HTMLVideoElement | null = null;
  let imageCtx: CanvasRenderingContext2D | null = null;
  let imageRefreshTimer: ReturnType<typeof setInterval> | null = null;
  const callType = ref<'1' | '2'>('1'); // 1 = inbound, 2 = outbound
  // Recording support

  // Event handlers
  const eventHandlers: Record<eventNameType, (e: any) => void> = {
    invite: (e) => {
      console.log(e, 'invite');

      if (twSip.value) {
        incomingCall.value = {
          callId: e.callId,
          from: e.displayName,
          type: e.mediaType,
          currentSession: null
        };

        // console.log(
        //   twSip.value.callList,
        //   ' twSip.value.callList',
        //   incomingCall.value
        // );

        twSip.value.callList?.forEach((sessionInfo: any) => {
          console.log('sessionInfo.id,', sessionInfo);
          console.log('sessionStateCallId.value,', incomingCall.value?.callId);
          callStatus.value = statusMap[sessionInfo.status];
          console.log(callStatus.value, 'callStatus.value');

          if (sessionInfo.id === incomingCall.value?.callId) {
            incomingCall.value!.currentSession = sessionInfo;
            currentCall.value = sessionInfo;
          }
        });
        if (options?.isMeeting) {
          answerCall(currentCall.value?.info?.mediaType);
        }
      }
    },
    sessionStateChange: async ({ state, callId, displayName }) => {
      if (state === 'Establishing') {
        callStatus.value = 'ringing';
        incomingCall.value = {
          callId
        };
      }
      if (state === 'Established') {
        const session = twSip.value?.callList.find((c) => c.id === callId);
        if (session) {
          // Runs once localStream becomes available
          localStream.value = session?.localMedia?.stream;

          remoteStream.value = session?.remoteMedia?.stream;
        }
        callStatus.value = 'inCall';
        currentCall.value = session;
      }
      if (state === 'Terminated') {
        // callStatus.value = 'idle';
        // currentCall.value = undefined;
        // localStream.value?.getTracks().forEach((track) => track.stop());
        // remoteStream.value?.getTracks().forEach((track) => track.stop());
        // localStream.value = undefined;
        // remoteStream.value = undefined;
      }
      console.log('Session状态变化:', state, callId, twSip.value);
    },
    registererStateChange: (state) => {
      console.log('话机状态变化:', state);
      if (state === 'Unregistered') loginStatus.value = 'loggedOut';
      if (state === 'Registered') loginStatus.value = 'loggedIn';
    },
    onDisconnect: (error) => console.error('WebSocket断开:', error),
    onAccept: ({ callId }) => console.log('通话被接听:', callId),
    onReject: ({ callId }) => {
      console.log('通话被拒绝:', callId);
      // callStatus.value = 'idle';
    },
    addtrack: ({ callId, event }) => {
      console.log(event, 'event');

      const track = event.track;
      if (!remoteStream.value) {
        remoteStream.value = new MediaStream();
      }
      remoteStream.value.addTrack(track);

      track.onended = () => {
        if (remoteStream.value) {
          remoteStream.value.removeTrack(track);
        }
      };
    },
    callMonitoring: (data) => {
      if (data.eventType === 'ringing' && incomingCall.value) {
        incomingCall.value.otherSession = data;
      }

      console.log('话务监控数据:', data);
    },
    connect: () => console.log('WebSocket连接成功'),
    onTrying: () => console.log('正在尝试连接...'),
    onremovetrack: () => console.log('媒体流被移除')
  };

  //#region Core methods
  /**
   * Initialize the TwSIP instance
   * @param {MediaConfig} mediaConfig media configuration
   * @param {SipConfig} sipConfig SIP configuration
   */
  const init = (mediaConfig?: MediaConfig, sipConfig?: SipConfig) => {
    try {
      twSip.value = new TwSip({
        mediaConfig: {
          remoteVideoContainer: '#remoteVideo',
          localVideoContainer: '#localVideo',
          remoteAudioContainer: '#remoteAudioBox',
          ...mediaConfig,
          ...twSipOptions.mediaConfig
        },
        sipConfig: {
          autoAppendMedia: true,
          ...sipConfig,
          ...twSipOptions.sipConfig
        }
      });

      console.log('TwSIP实例初始化成功');
    } catch (error) {
      console.error('TwSIP初始化失败:', error);
      throw error;
    }
  };

  //#region Phone operations

  /**
   * SIP phone login
   * @param {loginInfoType} account login credentials
   * @param {loginOptionType} optionsLogin login options
   */

  const login = async (
    account?: loginInfoType,
    optionsLogin?: loginOptionType
  ) => {
    // if (loginStatus.value === 'loggedIn') {
    //   await logout();
    // }
    if (!twSip.value) {
      init();
      setTimeout(() => {
        login();
      }, 500);

      console.error('TwSIP实例未初始化');

      // throw new Error('TwSIP实例未初始化');
      return;
    }

    try {
      loginStatus.value = 'loggingIn';
      const accountOptions: any = {
        ...twSipOptions.login,
        // authorizationUsername: options?.account || '',
        // authorizationPassword: options?.pass || '',
        ...account
      };
      const result = await twSip.value?.login(accountOptions, optionsLogin);
      console.log(result, 'result');
      cleanupEventListeners();
      setTimeout(() => {
        setupEventListeners();
      }, 100);
      if (result?.code === 200) {
        loginStatus.value = 'loggedIn';

        return result;
      } else {
        loginStatus.value = 'error';
        throw new Error(`登录失败: ${result?.msg}`);
      }
      // setTimeout(async () => {}, 1000);
    } catch (error) {
      loginStatus.value = 'error';
      console.error('登录出错:', error);
      throw error;
    }
  };

  /**
   * SIP phone logout
   */
  const logout = async () => {
    if (twSip.value && incomingCall.value?.callId) return;

    try {
      twSip.value?.loginOut();
      loginStatus.value = 'loggedOut';
      console.log('话机已登出');
    } catch (error) {
      console.error('登出失败:', error);
      throw error;
    }
  };

  /**
   * Place a call
   * @param {string} number target number
   * @param {'audio'|'video'} type call type
   */
  const makeCall = (
    number: string,
    type: 'audio' | 'video' = 'audio',
    data?: any
  ) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');
    if (callStatus.value !== 'idle') throw new Error('当前存在进行中的通话');

    try {
      callStatus.value = 'ringing';
      callType.value = '2';
      twSip.value.sendCall({
        phone: number,
        type,
        errFn: (error) => {
          // callStatus.value = 'idle';
          message.warn(`呼叫失败: ${error}`);
          throw new Error(`呼叫失败: ${error}`);
        },
        acceptFn: ({ callId }) => {
          currentCall.value = twSip.value?.callList.find(
            (c) => c.id === callId
          );
          callStatus.value = 'inCall';
        }
      });
      twSipOptions.stateCall = 'callOut';
    } catch (error) {
      // callStatus.value = 'idle';
      console.error('呼叫异常:', error);
      throw error;
    }
  };

  /**
   * Answer an incoming call
   * @param {'audio'|'video'} type answer media type
   */
  const answerCall = async (type: 'audio' | 'video' = 'audio') => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');
    if (!incomingCall.value) throw new Error('当前无来电');

    try {
      callType.value = '1';
      await twSip.value.answerCall({
        callId: incomingCall.value.callId,
        type
      });

      console.log(currentCall.value, 'currentCall.value');
      // incomingCall.value = undefined;
      twSipOptions.stateCall = 'inCall';
      callStatus.value = 'inCall';
    } catch (error) {
      // callStatus.value = 'idle';
      console.error('接听失败:', error);
      throw error;
    }
  };

  /**
   * Hang up a call
   * @param {string} callId call ID
   */
  const hangup = async (callId?: string) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    const targetCallId =
      callId || currentCall.value?.id || incomingCall.value?.callId;
    if (!targetCallId) {
      callStatus.value = 'idle';
      throw new Error('无效的通话ID');
    }

    try {
      await twSip.value.hangupCall(targetCallId);
      callStatus.value = 'idle';
      localStream.value?.getTracks().forEach((track) => track.stop());
      remoteStream.value?.getTracks().forEach((track) => track.stop());
      localStream.value = undefined;
      remoteStream.value = undefined;
      currentCall.value = undefined;
      incomingCall.value = undefined;
      twSipOptions.stateCall = undefined;
    } catch (error) {
      console.error('挂断失败:', error);
      throw error;
    }
  };

  /**
   * Send a DTMF signal
   * @param {string} msg DTMF characters
   * @param {string} callId call ID
   */
  const sendDTMF = (msg: string, callId?: string) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    const targetCallId = callId || currentCall.value?.id;
    if (!targetCallId) throw new Error('无效的通话ID');

    try {
      twSip.value.sendInfo({
        msg,
        callId: targetCallId
      });
    } catch (error) {
      console.error('发送DTMF失败:', error);
      throw error;
    }
  };

  /**
   * Toggle call hold
   * @param {string} callId call ID
   */
  const toggleHold = async (callId?: string) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    const targetCallId = callId || currentCall.value?.id;
    if (!targetCallId) throw new Error('无效的通话ID');

    try {
      await twSip.value.sendHold(targetCallId);
      const currentState = currentCall.value?.status;
      callStatus.value =
        currentState === CallStatusEnum.holdIng ? 'onHold' : 'inCall';
    } catch (error) {
      console.error('保持/恢复失败:', error);
      throw error;
    }
  };
  //#endregion

  //#region Call transfer
  /**
   * Blind transfer
   * @param {string} targetNumber target number
   * @param {string} callId call ID
   */
  const blindTransfer = (targetNumber: string, callId?: string) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    const targetCallId = callId || currentCall.value?.id;
    if (!targetCallId) throw new Error('无效的通话ID');

    try {
      twSip.value.sendRefer({
        callId: targetCallId,
        number: targetNumber,
        referType: 'blind'
      });
      callStatus.value = 'idle';
    } catch (error) {
      console.error('盲转失败:', error);
      throw error;
    }
  };

  /**
   * Attended transfer
   * @param {any} targetSession target session
   * @param {string} callId call ID
   */
  const attendedTransfer = (targetSession: any, callId?: string) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    const targetCallId = callId || currentCall.value?.id;
    if (!targetCallId) throw new Error('无效的通话ID');

    try {
      twSip.value.sendRefer({
        callId: targetCallId,
        referType: 'attended',
        toSession: targetSession
      });
      callStatus.value = 'idle';
    } catch (error) {
      console.error('协商转移失败:', error);
      throw error;
    }
  };
  //#endregion
  //#region Queue management
  /**
   * Fetch the queue list
   */
  const fetchQueues = async () => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    try {
      const result = await twSip.value.queue.getList();
      queues.value = result || [];
      return queues.value;
    } catch (error) {
      console.error('获取队列失败:', error);
      throw error;
    }
  };
  //#endregion
  //#region Call supervision
  /**
   * @description Monitor a given extension's call
   * @param {Object} monitorObj monitor configuration
   * @param {string} monitorObj.listenerExtNumber listener's extension
   * @param {number} monitorObj.monitorMode monitor mode (1 = listen, 2 = whisper, 3 = barge-in)
   * @param {string} monitorObj.monitoredExtNumber monitored extension
   * @returns {Promise} operation result
   */
  const monitorCall = (monitorObj: {
    listenerExtNumber: string;
    monitorMode: number;
    monitoredExtNumber: string;
  }) => {
    return twSip.value?.callManager.monitor(monitorObj);
  };
  /**
   * @description Force-hang-up the call with the given UUID
   * @param {string} uuid - unique ID of the callee's call
   * @returns {Promise} operation result
   */
  const forceHangup = (uuid: string) => {
    return twSip.value?.callManager.forcedHangUp(uuid);
  };
  //#endregion
  //#region Conference management
  /**
   * @description: Fetch the conference list
   * @return {*}
   */
  const fetchConferences = async (
    name?: string,
    isDynamic = true,
    type?: string
  ) => {
    if (!twSip.value) throw new Error('TwSIP实例未初始化');

    try {
      const result = await twSip.value.conferences.getList();
      conferences.value = result || [];

      if (
        isDynamic ||
        mapOrgType.value?.warningObj?.ajlxmc ||
        currentConferences.value?.conferenceName ||
        name
      ) {
        currentConferences.value = conferences.value.find(
          (item: conferencesObj) =>
            item.conferenceName == name ||
            item.conferenceName == currentConferences.value?.conferenceName ||
            item.conferenceName == mapOrgType.value?.warningObj?.ajlxmc
        );
      }
      if (!isDynamic) {
        const arr = conferences.value.filter((item) => {
          const isNumber = Number.isInteger(Number(item.conferenceName));
          console.log(
            item.active == 'false',
            isNumber,
            item.conferenceCategory,
            type,
            'conferences'
          );
          return (
            item.active == 'false' &&
            isNumber &&
            item.conferenceCategory == type
          );
        });
        currentConferences.value =
          arr.length > 0 ? arr[0] : conferences.value[1];
      }
      console.log(currentConferences.value, 'conferences');

      return [currentConferences.value, conferences.value];
    } catch (error) {
      console.error('获取会议列表失败:', error);
      throw error;
    }
  };

  /**
   * @description Create a new conference
   * @param {string} name - conference name
   * @param {'default'|'sd'} [category='default'] - conference type: default (audio) or sd (SD video)
   * @returns {Promise<{ conferenceId: string; conferenceName: string }>} conference ID and name
   */
   */
  const createConference = async (
    name: string,
    category: 'default' | 'sd' = 'default',
    data?: any
  ): Promise<{ conferenceId: string; conferenceName: string }> => {
    const result = await twSip.value?.conferences.create({
      conferenceCategory: category,
      conferenceName: name
    });
    currentConferences.value = result;
    console.log(result, 'createConference');
    // if (result) {
    //   const res = await fetchConferences(); // refresh the list after creating
    //   console.log(res, 'fetchConferences');
    // }

    return result;
  };
  /**
   * @description Invite extensions into a conference
   * @param {string} conferenceId - target conference ID
   * @param {string[]} extIds - invited extension numbers
   * @returns {Promise} operation result
   */
  const inviteToConference = async (
    conferenceId: string,
    extIds: string[],
    moderator: string,
    data?: any
  ): Promise<any> => {
    const result = await twSip.value?.conferences.invite({
      conferenceId,
      extId: extIds.join(','),
      moderator: moderator || false
    });

    if (result?.code === 200) {
      await fetchConferences(); // refresh the list after inviting
    }

    return result;
  };
  /**
   * @description Delete a conference
   * @param {string} conferenceId - conference ID to delete
   * @returns {Promise} operation result
   */
  const deleteConference = async (conferenceId: string): Promise<any> => {
    const result = await twSip.value?.conferences.delete(conferenceId);
    // if (result) {
    //   await fetchConferences(); // refresh the list afterwards
    // }
    return result;
  };
  /**
   * @description End a conference
   * @param {string} conferenceId - conference ID to end
   * @returns {Promise} operation result
   */
  const endConference = async (conferenceId: string): Promise<any> => {
    const result = await twSip.value?.conferences.end(conferenceId);
    // if (result) {
    //   await fetchConferences(); // refresh the list afterwards
    // }
    return result;
  };
  //#endregion

  //#region Helpers
  /** Set up event listeners */
  const setupEventListeners = () => {
    if (!twSip.value) return;

    twSip.value.event.removeEventListener();

    (Object.keys(eventHandlers) as eventNameType[]).forEach((eventName) => {
      twSip.value?.event.addEventListener(eventName, (e: any) => {
        eventHandlers[eventName](e);
      });
    });
  };
  // Apply constraints to the existing local stream's audio track
  function applyLocalAudioConstraints(localStream: MediaStream) {
    const audioTrack = localStream.getAudioTracks()[0];
    if (audioTrack) {
      audioTrack
        .applyConstraints({
          channelCount: 1,
          echoCancellation: true,
          noiseSuppression: true,
          autoGainControl: true
        })
        .then(() => {
          console.log('本地音频约束应用成功');
        })
        .catch((err) => {
          console.error('本地音频约束应用失败:', err);
        });
    }
  }

  /**
   * Let the user pick a media file (image or video)
   */
  const selectMediaFile = (): Promise<File> => {
    return new Promise((resolve) => {
      const fileInput = document.createElement('input');
      fileInput.type = 'file';
      fileInput.accept = 'image/*,video/*';

      fileInput.onchange = (e) => {
        const file = (e.target as HTMLInputElement).files?.[0];
        if (file) {
          twSipOptions.fileStream = file;
          resolve(file);
        }
      };

      fileInput.click();
    });
  };
  /**
   * @description Disable the local camera
   */
  function disableLocalVideo(disable: boolean, callId?: string) {
    const targetCallId = callId || currentCall.value?.id || incomingCall.value?.callId;
    if (!targetCallId) throw new Error('无有效通话ID');

    const call: any = twSip.value?.callList.find(
      (c: any) => c.id === targetCallId
    );
    if (!call?.session) throw new Error('会话不可用');
    const pc = call.session['sessionDescriptionHandler']['peerConnection'];
    if (['closed', 'failed'].includes(pc.connectionState)) {
      throw new Error('会话已结束');
    }
    pc.addEventListener('iceconnectionstatechange', () => {
      console.log('ICE连接状态:', pc.iceConnectionState);
    });

    const sender = pc.getSenders().find((s: any) => s.track?.kind === 'video');
    if (sender?.track) {
      sender.track.enabled = !disable; // pause the video track (recoverable)
    }
  }

  /**
   * @description Switch between a media file and the camera
   * @param {boolean} [IsFile=true] - whether to use a media file
   * @returns {Promise<void>}
   */
  const switchVideo = async (IsFile: boolean = true): Promise<void> => {
    try {
      const file = twSipOptions.fileStream;
      let customStream: MediaStream | any;

      if (IsFile) {
        // File mode: image or video file
        if (file.type.startsWith('image/')) {
          // Image file: build a static-image video stream
          customStream = await createImageStream(file);
        } else if (file.type.startsWith('video/')) {
          // Video file: build an audio-synced video stream
          customStream = await createAudioSyncedStream(file);
        } else {
          throw new Error('不支持的媒体文件类型');
        }
      } else {
        if (videoRef?.src) {
          videoRef.pause();
          URL.revokeObjectURL(videoRef.src);
          videoRef.remove();
        }
        // Camera mode
        customStream = await getCameraStream();
      }

      // Keep the current audio track (if needed)
      await replaceTrackSafe(customStream, '', !!IsFile);
    } catch (error) {
      console.error('切换视频源失败:', error);
      // On failure, fall back to the camera
      if (!IsFile) return;

      console.log('尝试恢复摄像头...');
      try {
        const camStream = await getCameraStream();
        await replaceTrackSafe(camStream, '', !!IsFile);
      } catch (fallbackError) {
        console.error('恢复摄像头失败:', fallbackError);
      }
    }
  };

  /**
   * @description 选择本地图片/视频文件并替换当前推流
   * @param {string} [callId] - 要替换的通话ID
   * @returns {Promise<void>}
   */
  const replaceMediaWithFile = async (callId?: string) => {
    try {
      // 1. 允许用户选择媒体文件(图片或视频)
      const file = await selectMediaFile();
      if (!file) return;

      console.log('Selected file:', file.type, file);

      // 2. 根据文件类型创建不同类型的媒体流
      let customStream: MediaStream;

      if (file.type.startsWith('image/')) {
        // 图片文件:创建静态图片视频流
        customStream = await createImageStream(file);
      } else if (file.type.startsWith('video/')) {
        // 视频文件:创建带同步的视频流
        customStream = await createAudioSyncedStream(file);
      } else {
        throw new Error('不支持的媒体文件类型');
      }

      // 3. 替换当前通话的媒体流
      await replaceTrackSafe(
        customStream,
        callId,
        file.type.startsWith('image/')
      );

      console.log('媒体流替换成功');
    } catch (error) {
      console.error('媒体流替换失败:', error);
      throw error;
    }
  };
  /**
   * 创建图片流(将图片转换为视频流)
   */
  async function createImageStream(file: File): Promise<MediaStream> {
    return new Promise((resolve, reject) => {
      // 创建新的canvas和上下文
      imageCanvas = document.createElement('canvas');
      imageCanvas.width = 640;
      imageCanvas.height = 480;
      imageCtx = imageCanvas.getContext('2d');

      if (!imageCtx) {
        reject(new Error('无法创建Canvas上下文'));
        return;
      }

      const img = new Image();
      img.onload = () => {
        try {
          // 清除之前可能存在的定时器
          if (imageRefreshTimer) {
            clearInterval(imageRefreshTimer);
            imageRefreshTimer = null;
          }

          // 初始绘制
          drawImageToCanvas(img);

          // 创建媒体流 - 只包含视频轨道
          const stream = imageCanvas!.captureStream(10); // 10 FPS

          // 定时重绘,与 captureStream 的 10 FPS 保持一致
          imageRefreshTimer = setInterval(() => {
            drawImageToCanvas(img);
          }, 100); // 每100毫秒重绘一次

          resolve(stream);
        } catch (error) {
          console.error('图片流创建失败:', error);
          reject(error as Error); // 明确 reject,避免返回空流掩盖错误
        }
      };

      img.onerror = () => {
        reject(new Error('图片加载失败'));
      };

      img.src = URL.createObjectURL(file);
    });
  }

  /**
   * 在canvas上绘制图片
   */
  function drawImageToCanvas(img: HTMLImageElement) {
    if (!imageCanvas || !imageCtx) return;

    const { width, height } = imageCanvas;

    // 清空画布
    imageCtx.clearRect(0, 0, width, height);

    // 居中绘制图片
    const imgRatio = img.width / img.height;
    const canvasRatio = width / height;

    let drawWidth, drawHeight, offsetX, offsetY;

    if (imgRatio > canvasRatio) {
      // 图片宽度更大
      drawHeight = height;
      drawWidth = height * imgRatio;
      offsetX = (width - drawWidth) / 2;
      offsetY = 0;
    } else {
      // 图片高度更大
      drawWidth = width;
      drawHeight = width / imgRatio;
      offsetX = 0;
      offsetY = (height - drawHeight) / 2;
    }

    imageCtx.drawImage(
      img,
      0,
      0,
      img.width,
      img.height,
      offsetX,
      offsetY,
      drawWidth,
      drawHeight
    );
  }
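  // 上面 drawImageToCanvas 的居中裁剪(cover)计算可以抽成纯函数,便于单测与复用。
  // 以下为示意实现(假设:函数名 coverRect 为本文自拟,并非 SDK 提供的 API)。
  function coverRect(
    imgW: number,
    imgH: number,
    canvasW: number,
    canvasH: number
  ): { drawWidth: number; drawHeight: number; offsetX: number; offsetY: number } {
    const imgRatio = imgW / imgH;
    const canvasRatio = canvasW / canvasH;
    if (imgRatio > canvasRatio) {
      // 图片更宽:以画布高度为准,宽度溢出并水平居中(offsetX 为负)
      const drawHeight = canvasH;
      const drawWidth = canvasH * imgRatio;
      return { drawWidth, drawHeight, offsetX: (canvasW - drawWidth) / 2, offsetY: 0 };
    }
    // 图片更高:以画布宽度为准,高度溢出并垂直居中(offsetY 为负)
    const drawWidth = canvasW;
    const drawHeight = canvasW / imgRatio;
    return { drawWidth, drawHeight, offsetX: 0, offsetY: (canvasH - drawHeight) / 2 };
  }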

  //canvas绘制视频流
  async function createAudioSyncedStream(file: File): Promise<MediaStream> {
    const video = document.createElement('video');
    const fileURL = URL.createObjectURL(file);
    if (videoRef?.src) {
      videoRef.pause();
      URL.revokeObjectURL(videoRef.src);
      videoRef.remove();
    }

    video.src = fileURL;
    video.loop = true;
    videoRef = video;
    // 创建音频上下文
    const audioContext = new AudioContext();
    const source = audioContext.createMediaElementSource(video);
    const destination = audioContext.createMediaStreamDestination();
    source.connect(destination);
    source.connect(audioContext.destination); // 保持本地播放

    return new Promise((resolve, reject) => {
      video.onloadedmetadata = async () => {
        const videoCanvas = document.createElement('canvas');
        const ctx = videoCanvas.getContext('2d')!;

        // 开始绘制循环:始终调度下一帧,视频数据就绪时才绘制,
        // 避免 readyState 暂时回落导致循环永久中断
        const drawLoop = () => {
          if (video.readyState >= 2) {
            ctx.drawImage(video, 0, 0, videoCanvas.width, videoCanvas.height);
          }
          requestAnimationFrame(drawLoop);
        };

        videoCanvas.width = video.videoWidth;
        videoCanvas.height = video.videoHeight;
        await video.play();
        drawLoop();

        // 创建无音频的纯视频流
        // const videoStream = videoCanvas.captureStream(25);
        // 合并视频流和音频流
        const videoStream = videoCanvas.captureStream();
        const combinedStream = new MediaStream([
          ...videoStream.getVideoTracks(),
          ...destination.stream.getAudioTracks()
        ]);
        resolve(combinedStream);
      };

      video.onerror = () => reject(new Error('视频加载失败'));
    });
  }
  // 获取摄像头媒体流
  async function getCameraStream() {
    try {
      const stream = await navigator.mediaDevices.getUserMedia({
        video: true,
        audio: {
          channelCount: 1, // 单声道(语音识别推荐)
          echoCancellation: true, // 开启回声消除
          noiseSuppression: true, // 开启噪声抑制
          autoGainControl: true // 可选:自动增益控制(辅助优化)
        }
      });
      return stream;
    } catch (error) {
      console.log(`获取摄像头失败: ${error}`);
      throw error;
    }
  }

  /**
   * @description 创建稳定视频流
   * @param {File} file - 源视频文件
   * @returns {Promise<MediaStream>} 创建的稳定视频流
   */
  async function createStableVideoStream(file: File): Promise<MediaStream> {
    const video: any = document.createElement('video');
    const fileURL = URL.createObjectURL(file);
    video.src = fileURL;
    video.loop = true;
    video.muted = true;
    video.playsInline = true;
    video.setAttribute('playsinline', '');

    // 确保视频持续播放
    const playPromise = video.play();
    if (playPromise !== undefined) {
      playPromise.catch((e: any) => console.error('播放失败:', e));
    }

    await new Promise<void>((resolve, reject) => {
      video.onloadedmetadata = () => resolve();
      video.onerror = () => reject(new Error('视频加载失败'));
      setTimeout(() => reject(new Error('视频加载超时')), 5000);
    });

    return video.captureStream(25);
  }

  // 替换视频轨道辅助函数
  async function replaceTrackSafe(
    newStream: MediaStream,
    callId?: string,
    IsFile: boolean = true
  ) {
    const targetCallId = callId || currentCall.value?.id;
    if (!targetCallId) throw new Error('无有效通话ID');

    const call: any = twSip.value?.callList.find(
      (c: any) => c.id === targetCallId
    );
    if (!call?.session) throw new Error('会话不可用');
    const file = twSipOptions.fileStream;
    const pc = call.session['sessionDescriptionHandler']['peerConnection'];
    if (['closed', 'failed'].includes(pc.connectionState)) {
      throw new Error('会话已结束');
    }
    // 调试用:赋值方式挂载,避免每次调用重复 addEventListener 导致监听器累积
    pc.oniceconnectionstatechange = () => {
      console.log('ICE连接状态:', pc.iceConnectionState);
    };
    fileStream.value = newStream;
    const newVideoTrack = newStream.getVideoTracks()[0];
    const newAudioTrack = newStream.getAudioTracks()[0];
    if (!newVideoTrack) throw new Error('无效的视频轨道');

    const sender = pc.getSenders().find((s: any) => s.track?.kind === 'video');
    const audioSender = pc
      .getSenders()
      .find((s: any) => s.track?.kind === 'audio');
    if (!sender) throw new Error('未找到视频发送器'); // 后续 replaceTrack 依赖视频发送器

    try {
      const constraints = {
        width: { ideal: 640 },
        height: { ideal: 480 },
        frameRate: { ideal: 24 }
      };
      await newVideoTrack.applyConstraints(constraints);
      await sender.replaceTrack(newVideoTrack);

      // 替换音频轨道(如果有);新流无音频时停掉旧音频轨道
      if (newAudioTrack && audioSender) {
        await audioSender.replaceTrack(newAudioTrack);
        console.log('音频轨道替换成功');
      } else if (!newAudioTrack && audioSender?.track) {
        audioSender.track.stop();
      }
      if (IsFile) {
        // ===== 添加优化参数 =====
        const params = sender.getParameters();

        if (!params.encodings) {
          params.encodings = [{}];
        }
        // 优化设置
        if (file.type.startsWith('image/')) {
          params.encodings[0].maxBitrate = 500000; // 500 kbps
          params.encodings[0].maxFramerate = 10; // 10 FPS
          params.encodings[0].scaleResolutionDownBy = 1.0;
        } else {
          params.encodings[0].maxBitrate = 1_500_000; // 限制为1.5 Mbps
          params.encodings[0].maxFramerate = 30; // 限制为30 FPS
          params.encodings[0].scaleResolutionDownBy = 1.5; // 降低分辨率
        }

        // 降级策略:带宽不足时优先保帧率
        params.degradationPreference = 'maintain-framerate';

        await sender.setParameters(params);

        console.log('发送参数优化完成');
      }

      console.log('视频轨道替换成功');
      // 更新本地视频显示
      if (twSipOptions.localVideoRef) {
        // 注意:这里我们只更新视频轨道,音频轨道不需要在本地播放
        const localStream2 = new MediaStream();
        if (newVideoTrack) localStream2.addTrack(newVideoTrack);
        localStream.value = localStream2;
        // 不添加音频轨道,避免本地回声
        twSipOptions.localVideoRef.srcObject = localStream2;
      }
    } catch (error) {
      console.error('轨道替换错误:', error);
      throw error;
    }
  }

  const pushAiPcmToLocalStream = async (
    pcmData: ArrayBuffer,
    callId?: string,
    sampleRate?: number
  ) => {
    // 检查通话状态,确保在通话中才能发送
    if (callStatus.value !== 'inCall' || !currentCall.value?.id) {
      console.error('当前无有效通话,无法推送AI音频');
      return;
    }

    try {
      const targetCallId = callId || currentCall.value?.id;
      if (!targetCallId) throw new Error('无有效通话ID');
      const call: any = twSip.value?.callList.find(
        (c: any) => c.id === targetCallId
      );
      if (!call?.session) throw new Error('会话不可用');

      const pc = call.session['sessionDescriptionHandler']['peerConnection'];
      if (['closed', 'failed'].includes(pc.connectionState)) {
        throw new Error('会话已结束');
      }
      // 调试用:赋值方式挂载,避免每次调用重复 addEventListener 导致监听器累积
      pc.oniceconnectionstatechange = () => {
        console.log('ICE连接状态:', pc.iceConnectionState);
      };

      // 1. 初始化音频上下文(采样率需与PCM数据的采样率一致)
      const audioContext = new AudioContext({ sampleRate: sampleRate });

      // 2. 将音频数据解码为AudioBuffer。注意:decodeAudioData 只能解码带
      //    文件头的数据(如WAV),若上游是裸PCM需先补WAV头再解码
      const audioBuffer = await audioContext.decodeAudioData(pcmData);

      // 3. 创建音频源和媒体流目标
      const source = audioContext.createBufferSource();
      source.buffer = audioBuffer;
      const destination = audioContext.createMediaStreamDestination();
      source.connect(destination);

      // 4. 启动音频源(确保数据可被捕获)
      source.start(0);

      // 5. 生成包含AI音频的新媒体流
      const aiAudioStream = destination.stream;
      // const newVideoTrack = aiAudioStream.getVideoTracks()[0];
      const newAudioTrack = aiAudioStream.getAudioTracks()[0];
      if (!newAudioTrack) throw new Error('无效的音频轨道');
      // 6. 找到PeerConnection上的音频发送器,替换其轨道
      const audioSender = pc
        .getSenders()
        .find((s: any) => s.track?.kind === 'audio');
      if (!audioSender) throw new Error('未找到音频发送器');
      await audioSender.replaceTrack(newAudioTrack);
      console.log('音频轨道替换成功');
      // await replaceTrackSafe(aiAudioStream, currentCall.value.id, false);

      // 7. 更新localStream引用(可选,确保状态同步)
      localStream.value = aiAudioStream;

      console.log('AI合成音频已替换到本地流,开始向远端发送');
    } catch (error) {
      console.error('替换本地流音频轨道失败:', error);
      throw error;
    }
  };
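  // 补充说明:decodeAudioData 只能解码带文件头/容器的音频(如 WAV、MP3),
  // 裸 PCM 直接传入会解码失败。常见做法是先给 PCM 补一个 44 字节的 WAV 头。
  // 以下为 16bit 小端 PCM 的示意实现(函数名 wrapPcmAsWav 为本文自拟,非 SDK API):
  function wrapPcmAsWav(
    pcm: ArrayBuffer,
    sampleRate: number,
    channels: number = 1
  ): ArrayBuffer {
    const bytesPerSample = 2; // 16bit
    const blockAlign = channels * bytesPerSample;
    const buffer = new ArrayBuffer(44 + pcm.byteLength);
    const view = new DataView(buffer);
    const writeStr = (offset: number, s: string) => {
      for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
    };
    writeStr(0, 'RIFF');
    view.setUint32(4, 36 + pcm.byteLength, true); // RIFF块剩余长度
    writeStr(8, 'WAVE');
    writeStr(12, 'fmt ');
    view.setUint32(16, 16, true); // fmt 子块长度
    view.setUint16(20, 1, true); // 1 = PCM 编码
    view.setUint16(22, channels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * blockAlign, true); // 字节率
    view.setUint16(32, blockAlign, true);
    view.setUint16(34, 16, true); // 位深
    writeStr(36, 'data');
    view.setUint32(40, pcm.byteLength, true);
    new Uint8Array(buffer, 44).set(new Uint8Array(pcm));
    return buffer;
  }
  // 用法示意:await audioContext.decodeAudioData(wrapPcmAsWav(pcmData, 16000));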

  //#endregion
  /** 清理事件监听 */
  const cleanupEventListeners = () => {
    if (!twSip.value) return;
    (Object.keys(eventHandlers) as eventNameType[]).forEach((eventName) => {
      twSip.value?.event.removeEventListener(eventName);
    });
  };
  //#endregion
  watch(
    () => incomingCall.value?.callId,
    (val) => {
      YW.api.send('YW.popup.getCustomPopup', (res: { data: any }) => {
        const isOpenPage = res.data.some((val: { id: string }) =>
          ['callTalk', 'resourceInfo'].includes(val.id)
        );

        if (!isOpenPage) {
          if (
            callStatus.value === 'ringing' &&
            currentCall.value?.info?.mediaType == 'audio'
          ) {
            updatePage(
              `dutyVoiceCall`,
              {},
              {
                visible: true,
                ...getPageConfig('dutyVoiceCall')
              }
            );
          }
          if (
            callStatus.value === 'ringing' &&
            currentCall.value?.info?.mediaType == 'video'
          ) {
            updatePage(
              `dutyVideoCall`,
              {},
              {
                visible: true,
                ...getPageConfig('dutyVideoCall')
              }
            );
          }
        }
      });
    }
  );

  async function callRecordAddApi(data: addCall) {
    const targetCallId = currentCall.value?.id || incomingCall.value?.callId;
    const callId =
      data.eventType == 1
        ? twSipOptions.login?.authorizationUsername
        : currentConferences.value.conferenceId;

    if (data.eventType == 2) {
      callType.value = '2';
    }
    if (data && callId) {
      const data2 = {
        callId,
        callerNumber: twSipOptions.login?.authorizationUsername,
        caseId: mapOrgType.value?.warningObj?.ajid || '',
        callType: callType.value,

        ...data,
        eventType: currentCall.value?.info?.mediaType == 'video' ? 2 : 1
      };
      console.log(data2, '添加通话记录');

      return await callRecordAdd(data2);
    }
  }
  function clearCover() {
    logout();
  }
  // 生命周期
  onMounted(() => {
    window.addEventListener('unload', clearCover);
    if (twSipOptions.auto && !twSip.value) {
      init();
      login();
    }
  });
  watch(
    () => localStream.value,
    async (newStream) => {
      // 自动播放媒体流

      if (currentCall.value?.localMedia?.stream && twSipOptions.localVideoRef) {
        twSipOptions.localVideoRef!.srcObject =
          currentCall.value.localMedia.stream;
      }
    }
  );
  watch(
    () => remoteStream.value,
    async (newStream) => {
      console.log(
        currentCall.value?.remoteMedia?.stream && twSipOptions.remoteVideoRef,
        'newStream'
      );

      if (
        currentCall.value?.remoteMedia?.stream &&
        twSipOptions.remoteVideoRef
      ) {
        twSipOptions.remoteVideoRef!.srcObject =
          currentCall.value?.remoteMedia.stream;

        const res = await twSip.value?.conferences.getMembers(
          currentConferences.value?.conferenceId
        );
        console.log(res, 'layoutSet');

        twSip.value?.conferences.layoutSet({
          conferenceId: currentConferences.value?.conferenceId,
          layout: res.length
        });
      }
    }
  );
  // 媒体流处理:按通话状态启动/停止录制(录制逻辑留作扩展点)
  watch(
    () => callStatus.value,
    async (status) => {
      if (status === 'inCall' && currentCall.value) {
        // TODO: 视频通话时启动视频录制;音频通话时启动音频录制
      } else if (status === 'idle') {
        // TODO: 通话结束,停止所有录制
      }
    }
  );
  watch(loginStatus, () => {
    if (loginStatus.value === 'error' && twSipOptions.reconnect > 0) {
      twSipOptions.reconnect = twSipOptions.reconnect - 1;
      console.log('重连中...', twSipOptions.reconnect);
      login(); // 实际触发重连,否则只扣减次数不会重新登录
    }
  });

  // 在组件卸载时清理资源
  onUnmounted(() => {
    if (imageRefreshTimer) {
      clearInterval(imageRefreshTimer);
      imageRefreshTimer = null;
    }

    // 清理canvas资源
    if (imageCanvas) {
      try {
        imageCanvas.remove();
      } catch (e) {
        console.error('清理Canvas失败:', e);
      }
      imageCanvas = null;
    }

    imageCtx = null;
  });

  // 返回状态和方法
  const state = {
    twSip,
    statusMapText,
    twSipOptions,
    loginStatus,
    callStatus,
    currentCall,
    incomingCall,
    conferences,
    currentConferences,
    queues,
    callRecords,
    localStream,
    remoteStream
  };

  const methods = {
    init,
    login,
    logout,
    makeCall,
    answerCall,
    hangup,
    sendDTMF,
    toggleHold,
    blindTransfer,
    attendedTransfer,
    monitorCall,
    forceHangup,
    fetchQueues,
    createConference,
    inviteToConference,
    fetchConferences,
    endConference,
    deleteConference,
    callRecordAddApi,
    replaceMediaWithFile,
    switchVideo,
    disableLocalVideo,
    replaceTrackSafe,
    pushAiPcmToLocalStream
  };

  return [state, methods];
}

useTwsipAI.ts

typescript 复制代码
import {
  ref,
  reactive,
  Ref,
  nextTick,
  watch,
  onMounted,
  onUnmounted
} from 'vue';

import {
  Message,
  SpeechMethods,
  SpeechOptions,
  SpeechState,
  UserType,
  WebSocketMessage
} from './useSpeechCustom';
import { storages } from '@/utils/storage';
import { StorageKey } from '@/enum/storage';
import { generateUUID } from './useSpeechCustom';
// @ts-ignore
import PCMAudioPlayer from '/public/audio_player.js';
import { gdPoiQuery } from '@/api/cim';
import { getAliyunToken, runAiAgent } from '@/api/aiAgent';
import { AiConfigKeys } from '@/utils/baseConfig';
import { TwSipMethods, TwSipState, useTwSip } from './useTwSip';
import { useMediaRecorder } from './useMediaRecorder';

interface UseTwsipAIOptions extends SpeechOptions {
  account?: string;
  pass?: string;
}

interface aiState extends TwSipState, SpeechState {
  isAiActive: boolean;
  isProcessing: boolean;
  callDirection: 'inbound' | 'outbound';
  autoAnswer: boolean;
}

interface aiMethods extends Partial<TwSipMethods>, Partial<SpeechMethods> {
  initAiCallSystem: () => Promise<void>;
  makeAiCall: (number: string) => Promise<void>;
  answerAiCall: () => Promise<void>;
  hangupAiCall: () => Promise<void>;
  resetAiCallSystem: () => void;
  startConversation: () => Promise<void>;
  stopConversation: () => void;
  toggleRecording: () => Promise<void>;
  clearAudio: () => void;
  connect: () => Promise<void>;
  disconnect: () => void;
  reconnect: () => Promise<void>;
  reset: () => void;
}

// 远端用户说话 → 语音识别 → AI处理 → 处理结果 → 语音合成 → 播放给远端用户
export function useTwsipAI(options: UseTwsipAIOptions): [aiState, aiMethods] {
  const sampleRate = options.sampleRate || 16000;
  const bufferSize = options.bufferSize || 2048;
  let audioContext: AudioContext | null = null;
  let abortController: AbortController | null = null;
  // 播放器采样率需与合成端下发PCM的采样率保持一致,否则回放会变调变速
  let player = new PCMAudioPlayer(sampleRate);

  // 初始化useTwSip
  const [twSipState, twSipMethods] = useTwSip({
    account: options?.account,
    pass: options?.pass,
    auto: false // 禁用自动登录,由我们控制
  });

  // 添加录制功能
  const [audioRemoteRecorder] = useMediaRecorder('audio', {});
  const [audioLocalRecorder] = useMediaRecorder('audio', {});

  // 引用聊天列表容器
  const chatListRef = options?.chatListRef || ref<HTMLElement | null>(null);

  // 状态管理
  const state = ref<SpeechState>({
    msgList: [],
    isThinking: false,
    isRecording: false,
    transcript: '',
    finalTranscript: '',
    isSynthesizing: false,
    isPlaying: false,
    aiResObject: null,
    audioData: [],
    recognition: {
      socket: null,
      status: 'disconnected',
      taskId: '',
      error: null,
      logs: []
    },
    synthesis: {
      socket: null,
      status: 'disconnected',
      taskId: '',
      error: null,
      logs: []
    },
    voiceStyle: 'longxiaochun_v2',
    voiceOptions: [
      { value: 'longjiayi_v2', label: '粤语-龙嘉怡' },
      { value: 'longyingtian', label: '龙应甜' },
      { value: 'longxiaoxia_v2', label: '龙小夏' },
      { value: 'longyingmu', label: '龙应沐' },
      { value: 'longanpei', label: '龙安培' },
      { value: 'longxiaochun_v2', label: '龙安柔' }
    ],
    isConnected: false,
    isSynthesisConnected: false,
    statusText: '未连接',
    synthesisStatusText: '未连接'
  });

  // AI通话状态管理
  const aiCallState = reactive({
    isAiActive: false,
    isProcessing: false,
    callDirection: 'outbound' as 'inbound' | 'outbound',
    autoAnswer: true // 自动接听来电
  });

  // 防重复触发标记:避免多次启动对话
  const hasStartedConversation = ref(false);

  // WS连接防重入锁:避免短时间内重复建立连接
  const connectingLock = {
    recognition: false,
    synthesis: false
  };

  // WebSocket连接管理器
  const connectWebSocket = async (
    type: 'recognition' | 'synthesis',
    voiceStyle?: string
  ) => {
    // 防重入:如果正在连接,直接返回
    if (connectingLock[type]) return;
    connectingLock[type] = true;

    try {
      // 验证配置
      if (!options.appkey || !options.token) {
        const errorMsg = '请提供AppKey和Token';
        state.value[type].error = errorMsg;
        throw new Error(errorMsg);
      }

      // 断开现有连接
      if (state.value[type].socket) {
        state.value[type].socket?.close(1000, 'reconnect');
        state.value[type].socket = null;
      }

      // 更新状态
      state.value[type].status = 'connecting';
      state.value[type].error = null;

      // 连接配置
      const endpoints = {
        recognition: `wss://nls-gateway-cn-beijing.aliyuncs.com/ws/v1?token=${options.token}`,
        synthesis: `wss://nls-gateway-cn-beijing.aliyuncs.com/ws/v1?token=${options.token}`
      };

      const taskId = generateUUID();
      state.value[type].taskId = taskId;

      const socket = new WebSocket(endpoints[type]);
      state.value[type].socket = socket;

      if (type === 'synthesis') {
        socket.binaryType = 'arraybuffer';
        player.connect();
        player.stop();
      }

      // 连接成功回调
      socket.onopen = () => {
        state.value[type].status = 'connected';
        // 更新连接状态文本
        if (type === 'recognition') {
          state.value.statusText = '识别服务已连接';
        } else {
          state.value.synthesisStatusText = '合成服务已连接';
        }

        // 发送启动指令
        const startMessages = {
          recognition: {
            header: {
              appkey: options.appkey,
              namespace: 'SpeechTranscriber',
              name: 'StartTranscription',
              task_id: taskId,
              message_id: generateUUID()
            },
            payload: {
              format: 'pcm',
              sample_rate: sampleRate,
              enable_intermediate_result: true,
              enable_punctuation_prediction: true,
              enable_inverse_text_normalization: true,
              enable_voice_detection: true
            }
          },
          synthesis: {
            header: {
              appkey: options.appkey,
              namespace: 'FlowingSpeechSynthesizer',
              name: 'StartSynthesis',
              task_id: taskId,
              message_id: generateUUID()
            },
            payload: {
              voice:
                state.value.voiceStyle || state.value.voiceOptions[4].value,
              format: 'PCM',
              sample_rate: sampleRate,
              volume: 100,
              speech_rate: -200,
              pitch_rate: 0
            }
          }
        };

        socket.send(JSON.stringify(startMessages[type]));
      };

      // 消息处理
      socket.onmessage = async (event) => {
        if (type === 'recognition') {
          await handleRecognitionMessage(event);
        } else {
          await handleSynthesisMessage(event);
        }
      };

      // 错误处理
      socket.onerror = (error) => {
        state.value[type].status = 'error';
        const errorMsg = `${
          type === 'recognition' ? '语音识别' : '语音合成'
        }连接错误: ${(error as unknown as Error).message}`;
        state.value[type].error = errorMsg;
        console.error(errorMsg);
      };

      // 关闭处理
      socket.onclose = (event) => {
        console.log(
          `${type === 'recognition' ? '识别' : '合成'}WS连接关闭:`,
          event.code,
          event.reason
        );
        // 仅在非主动关闭时更新状态
        if (event.code !== 1000) {
          state.value[type].status = 'disconnected';
          // 自动重连(仅当通话活跃时)
          if (aiCallState.isAiActive && !connectingLock[type]) {
            setTimeout(() => connectWebSocket(type), 2000);
          }
        }
      };
    } catch (error) {
      state.value[type].status = 'error';
      const errorMsg = `连接失败: ${(error as Error).message}`;
      state.value[type].error = errorMsg;
      console.error(errorMsg);
      throw error;
    } finally {
      // 释放锁
      connectingLock[type] = false;
    }
  };
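  // 补充:上面 onclose 里用固定 2000ms 重连,网络抖动时容易造成连续高频重连。
  // 可改为"指数退避 + 上限"的延时策略,以下为延时计算的示意(函数名为本文自拟):
  const backoffDelay = (
    attempt: number, // 第几次重试,从 0 开始
    baseMs: number = 1000,
    maxMs: number = 30000
  ): number => Math.min(maxMs, baseMs * 2 ** attempt);
  // 用法示意:setTimeout(() => connectWebSocket(type), backoffDelay(retryCount++));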

  // 识别消息处理
  const handleRecognitionMessage = async (event: MessageEvent) => {
    try {
      const message: WebSocketMessage = JSON.parse(event.data);
      console.log('识别消息:', message.header.name, message);

      switch (message.header.name) {
        case 'TranscriptionStarted':
          console.log('识别开始,当前通话状态:', twSipState.callStatus.value);
          // 通话已建立时启动录音(避免提前录音)
          if (twSipState.callStatus.value === 'inCall') {
            startRecording();
          }
          break;

        case 'TranscriptionResultChanged':
          state.value.transcript = message.payload.result;
          break;

        case 'SentenceEnd':
          state.value.finalTranscript = message.payload.result;
          // 自动触发AI对话流程
          if (state.value.finalTranscript) {
            addMessage(state.value.finalTranscript, UserType.Send);
            // await startSynthesis(state.value.finalTranscript);
            startSynthesis(state.value.finalTranscript);
          }
          console.log(state.value?.finalTranscript, 'finalTranscript');

          break;

        case 'TaskFailed':
          state.value.recognition.status = 'error';
          state.value.recognition.error = `识别失败: ${message.payload.message}`;
          console.error('识别任务失败:', message.payload.message);
          // 触发重连
          if (aiCallState.isAiActive) {
            // setTimeout(() => connectWebSocket('recognition'), 2000);
          }
          break;
      }
    } catch (error) {
      const errMsg = `解析识别消息失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      console.error(errMsg);
    }
  };

  // 合成消息处理
  const handleSynthesisMessage = async (event: MessageEvent) => {
    if (typeof event.data === 'string') {
      try {
        const message: WebSocketMessage = JSON.parse(event.data);
        console.log('合成消息:', message.header.name, message);

        switch (message.header.name) {
          case 'SynthesisStarted':
            state.value.isSynthesizing = true;
            break;

          case 'SynthesisCompleted':
            state.value.isSynthesizing = false;
            // 自动播放合成结果(已通过player实时推送,这里仅做状态更新)
            state.value.isPlaying = false;
            break;

          case 'TaskFailed':
            state.value.synthesis.status = 'error';
            state.value.synthesis.error = `合成失败: ${message.payload.message}`;
            state.value.isSynthesizing = false;
            console.error('合成任务失败:', message.payload.message);
            // 触发重连(仅当通话活跃时)
            // if (aiCallState.isAiActive) {
            //   setTimeout(() => connectWebSocket('synthesis'), 2000);
            // }
            break;
        }
      } catch (error) {
        const errMsg = `解析合成消息失败: ${(error as Error).message}`;
        state.value.synthesis.error = errMsg;
        console.error(errMsg);
        // // 优化:仅当连接异常时才重连,避免解析二进制数据误触发
        // if (
        //   state.value.synthesis.status !== 'connecting' &&
        //   ['error', 'disconnected'].includes(state.value.synthesis.status)
        // ) {
        //   setTimeout(() => connectWebSocket('synthesis'), 1000);
        // }
      }
    } else if (event.data instanceof ArrayBuffer) {
      // 处理二进制音频数据(实时推送至播放器和WebRTC流)
      const pcmData = event.data;
      state.value.audioData.push(pcmData);
      player.pushPCM(pcmData);
      // twSipMethods.pushAiPcmToLocalStream?.(pcmData); // 可选链避免报错
    }
  };

  // 初始化音频上下文
  const initAudioContext = async () => {
    if (audioContext) {
      // 检查并激活已存在的 AudioContext
      if (audioContext.state === 'suspended') {
        await audioContext.resume();
      }
      return;
    }

    try {
      audioContext = new (window.AudioContext ||
        (window as any).webkitAudioContext)({
        sampleRate
      });
    } catch (error) {
      const errMsg = `音频初始化失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      console.error(errMsg);
      throw error;
    }
  };

  // 播放音频(批量播放用,实时播放已通过PCMAudioPlayer处理)
  const playAudio2 = async () => {
    if (state.value.audioData.length === 0) return;

    if (!audioContext) await initAudioContext();

    try {
      state.value.isPlaying = true;

      // 合并音频数据
      const totalLength = state.value.audioData.reduce(
        (sum, buf) => sum + buf.byteLength,
        0
      );
      const merged = new Uint8Array(totalLength);
      let offset = 0;
      state.value.audioData.forEach((buf) => {
        merged.set(new Uint8Array(buf), offset);
        offset += buf.byteLength;
      });

      // 解码并播放
      const audioBuffer = await audioContext!.decodeAudioData(merged.buffer);
      const source = audioContext!.createBufferSource();
      source.buffer = audioBuffer;
      source.connect(audioContext!.destination);
      source.start();

      source.onended = () => {
        state.value.isPlaying = false;
        clearAudio();
      };
    } catch (error) {
      const errMsg = `播放失败: ${(error as Error).message}`;
      state.value.synthesis.error = errMsg;
      console.error(errMsg);
      state.value.isPlaying = false;
    }
  };

  // 开始录音:采集远端音频PCM,实时推送到识别WS
  const startRecording = async () => {
    if (state.value.isRecording) return;

    try {
      audioRemoteRecorder.methods.startRealTimePCMStream(
        (pcmData, timestamp) => {
          // console.log(pcmData, 'pcmData');

          // 仅当识别WS处于打开状态时发送数据
          if (state.value.recognition.socket?.readyState === WebSocket.OPEN) {
            state.value.recognition.socket.send(pcmData.buffer);
          }
        }
      );
      state.value.isRecording = true;
      console.log('录音已启动');
    } catch (error) {
      const errMsg = `启动录音失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      console.error(errMsg);
    }
  };
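  // 补充:若采集端拿到的是 Web Audio 默认的 Float32 样本(浏览器常见为 48kHz),
  // 送入 16k ASR 前需要先降采样并量化为 16bit PCM。以下为最近邻抽样的示意实现
  // (生产环境建议改用带低通滤波的重采样;函数名 float32ToInt16Pcm 为本文自拟):
  function float32ToInt16Pcm(
    input: Float32Array,
    inputRate: number,
    targetRate: number = 16000
  ): Int16Array {
    const ratio = inputRate / targetRate;
    const out = new Int16Array(Math.floor(input.length / ratio));
    for (let i = 0; i < out.length; i++) {
      // 最近邻抽样 + 裁剪到 [-1, 1]
      const s = Math.max(-1, Math.min(1, input[Math.floor(i * ratio)]));
      // 量化到 16bit 有符号整数
      out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
    }
    return out;
  }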

  // 开始语音合成
  const startSynthesis = async (text: string) => {
    if (!text) return;

    if (
      !state.value.synthesis.socket ||
      state.value.synthesis.socket.readyState !== WebSocket.OPEN
    ) {
      console.log('合成连接未就绪,尝试重连后合成');
      await connectWebSocket('synthesis');
      // 重连后再次尝试(最多重试1次)
      if (state.value.synthesis.socket?.readyState !== WebSocket.OPEN) {
        state.value.synthesis.error = '合成连接失败,无法进行语音合成';
        return;
      }
    }

    try {
      state.value.isSynthesizing = true;
      state.value.synthesis.status = 'synthesizing';

      const message: WebSocketMessage = {
        header: {
          appkey: options.appkey,
          namespace: 'FlowingSpeechSynthesizer',
          name: 'RunSynthesis',
          task_id: state.value.synthesis.taskId,
          message_id: generateUUID()
        },
        payload: { text }
      };

      console.log('发送合成请求:', message);
      state.value.synthesis.socket.send(JSON.stringify(message));
      // 添加AI回复到聊天列表
      addMessage(text, UserType.Receive);
    } catch (error) {
      const errMsg = `合成失败: ${(error as Error).message}`;
      state.value.synthesis.error = errMsg;
      console.error(errMsg);
      state.value.isSynthesizing = false;
      state.value.synthesis.status = 'connected';
    }
  };

  /**
   * 发送 StopSynthesis 指令并断开合成连接
   */
  const sendStopSynthesis = () => {
    if (state.value.synthesis.socket) {
      if (
        state.value.synthesis.socket.readyState === WebSocket.OPEN &&
        state.value.synthesis.taskId
      ) {
        state.value.synthesis.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'FlowingSpeechSynthesizer',
              name: 'StopSynthesis',
              task_id: state.value.synthesis.taskId,
              appkey: options.appkey
            }
          })
        );
      }
      state.value.synthesis.socket.close(1000, 'stop synthesis');
      state.value.synthesis.socket = null;
      state.value.synthesis.status = 'disconnected';
    }
  };

  // Stop recording
  const stopRecording = () => {
    if (!state.value.isRecording) return;

    try {
      // Stop both PCM streams (local mic and remote party)
      audioLocalRecorder.methods.stopRealTimePCMStream();
      audioRemoteRecorder.methods.stopRealTimePCMStream();
      // Send the stop command
      if (
        state.value.recognition.socket?.readyState === WebSocket.OPEN &&
        state.value.recognition.taskId
      ) {
        state.value.recognition.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'SpeechTranscriber',
              name: 'StopTranscription',
              task_id: state.value.recognition.taskId
            }
          })
        );
      }

      state.value.isRecording = false;
      console.log('Recording stopped');
    } catch (error) {
      const errMsg = `Failed to stop recording: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      console.error(errMsg);
    }
  };

  // Append a message to the chat list
  function addMessage(text: string, userType: UserType) {
    state.value.msgList.push({
      messageId: generateUUID(),
      textMsg: text,
      type: 'text',
      time: new Date().toLocaleTimeString(),
      userType
    });
    scrollToBottom();
  }

  // Scroll the chat list to the bottom
  function scrollToBottom() {
    nextTick(() => {
      if (chatListRef.value) {
        chatListRef.value.scrollTop = chatListRef.value.scrollHeight;
      }
    });
  }

  // Stop the conversation (tear down WS connections and recording)
  const stopConversation = () => {
    stopRecording();
    sendStopSynthesis();
    if (abortController) {
      abortController.abort();
      abortController = null;
    }
    // Reset the conversation flag
    hasStartedConversation.value = false;
  };

  // Start the conversation (open WS connections)
  const startConversation = async () => {
    try {
      console.log('Starting conversation: opening recognition and synthesis connections');
      await Promise.all([
        connectWebSocket('recognition'),
        connectWebSocket('synthesis')
      ]);

      // Play the default opening script on first start
      if (state.value.msgList.length === 0) {
        setTimeout(() => {
          startDefaultScript();
        }, 1000);
      }
    } catch (error) {
      console.error('Failed to start conversation:', error);
      throw error;
    }
  };

  // Default opening / closing script
  function startDefaultScript(end?: boolean) {
    // Display variant: '您好,深圳119,请问着火了吗?'
    // The synthesized variant below spells 119 as 幺幺九 so the TTS
    // engine pronounces the digits correctly.
    const closingTexts = [
      '好的消防马上过去了,我这边发一个短信给您,到时候您上传一下图片可以吗?',
      '好的消防已经过去了,谢谢您报警'
    ];

    if (end) {
      // startSynthesis already appends each reply to the chat list,
      // so calling addMessage here as well would duplicate entries
      closingTexts.forEach((item) => {
        startSynthesis(item);
      });
    } else {
      const defaultText = '您好,深圳幺幺九,请问着火了吗?';
      startSynthesis(defaultText);
    }
    scrollToBottom();
  }

  /**
   * Initialize the AI call system
   */
  const initAiCallSystem = async () => {
    try {
      console.log('Initializing AI call system');
      await twSipMethods.init();
      // Clean up any stale connections on init
      stopConversation();
      console.log('AI call system initialized');
    } catch (error) {
      console.error('AI call system initialization failed:', error);
      throw error;
    }
  };

  /**
   * Start an outbound AI call
   * @param phoneNumber target phone number
   */
  const makeAiCall = async (phoneNumber: string) => {
    if (!phoneNumber) throw new Error('Please enter a target phone number');

    try {
      aiCallState.isAiActive = true;
      aiCallState.callDirection = 'outbound';
      // Place the call (audio only)
      await twSipMethods.makeCall(phoneNumber, 'audio');
      console.log('AI outbound call placed, number:', phoneNumber);
    } catch (error) {
      aiCallState.isAiActive = false;
      console.error('Failed to start AI call:', error);
      throw error;
    }
  };

  /**
   * Answer an incoming AI call
   */
  const answerAiCall = async () => {
    try {
      if (!twSipState.incomingCall) {
        throw new Error('No incoming call to answer');
      }

      aiCallState.isAiActive = true;
      aiCallState.callDirection = 'inbound';
      // Answer audio-only to avoid camera permission prompts
      await twSipMethods.answerCall('audio');
      console.log('Incoming AI call answered');
    } catch (error) {
      aiCallState.isAiActive = false;
      console.error('Failed to answer AI call:', error);
      throw error;
    }
  };

  /**
   * Hang up the AI call
   */
  const hangupAiCall = async () => {
    try {
      console.log('Hanging up AI call');
      // End the call
      await twSipMethods.hangup();
      // Stop the voice conversation and release resources
      stopConversation();
      audioLocalRecorder.methods.stopRealTimePCMStream();
      audioRemoteRecorder.methods.stopRealTimePCMStream();
      // Close all WS connections
      if (state.value.recognition.socket) {
        state.value.recognition.socket.close(1000, 'call hangup');
        state.value.recognition.socket = null;
      }
      if (state.value.synthesis.socket) {
        state.value.synthesis.socket.close(1000, 'call hangup');
        state.value.synthesis.socket = null;
      }
      // Reset state
      aiCallState.isAiActive = false;
      aiCallState.isProcessing = false;
      hasStartedConversation.value = false;
      console.log('AI call hung up');
    } catch (error) {
      console.error('Failed to hang up AI call:', error);
      throw error;
    }
  };

  /**
   * Handle an incoming call (auto-answer)
   */
  const handleIncomingCall = async () => {
    try {
      if (
        aiCallState.autoAnswer &&
        twSipState.incomingCall &&
        !aiCallState.isAiActive
      ) {
        await nextTick(); // wait for DOM updates
        await answerAiCall();
      }
    } catch (error) {
      console.error('Failed to handle incoming call:', error);
    }
  };

  /**
   * Reset the AI call system
   */
  const resetAiCallSystem = () => {
    try {
      console.log('Resetting AI call system');
      // Hang up the current call, if any
      if (aiCallState.isAiActive) {
        hangupAiCall();
      }
      // Reset the speech subsystem
      reset();
      console.log('AI call system reset');
    } catch (error) {
      console.error('Failed to reset AI call system:', error);
    }
  };

  // Toggle recording state
  const toggleRecording = async () => {
    state.value.isRecording ? stopRecording() : await startConversation();
  };

  // Clear buffered audio data
  const clearAudio = () => {
    state.value.audioData = [];
    player.clear(); // flush the player's buffer
  };

  // Open all WS connections
  const connect = async () => {
    await Promise.all([
      connectWebSocket('recognition'),
      connectWebSocket('synthesis')
    ]);
  };

  // Close all WS connections
  const disconnect = () => {
    // Close the recognition connection
    if (state.value.recognition.socket) {
      if (
        state.value.recognition.socket.readyState === WebSocket.OPEN &&
        state.value.recognition.taskId
      ) {
        state.value.recognition.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'SpeechTranscriber',
              name: 'StopTranscription',
              task_id: state.value.recognition.taskId
            }
          })
        );
      }
      state.value.recognition.socket.close(1000, 'disconnect');
      state.value.recognition.socket = null;
      state.value.recognition.status = 'disconnected';
    }

    // Close the synthesis connection
    if (state.value.synthesis.socket) {
      if (
        state.value.synthesis.socket.readyState === WebSocket.OPEN &&
        state.value.synthesis.taskId
      ) {
        state.value.synthesis.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'FlowingSpeechSynthesizer',
              name: 'StopSynthesis',
              task_id: state.value.synthesis.taskId,
              appkey: options.appkey
            }
          })
        );
      }
      state.value.synthesis.socket.close(1000, 'disconnect');
      state.value.synthesis.socket = null;
      state.value.synthesis.status = 'disconnected';
    }

    stopRecording();
  };

  // Re-establish all WS connections
  const reconnect = async () => {
    disconnect();
    await connect();
  };

  // Reset the speech subsystem state
  const reset = () => {
    disconnect();
    state.value.msgList = [];
    state.value.transcript = '';
    state.value.finalTranscript = '';
    state.value.audioData = [];
    state.value.recognition.logs = [];
    state.value.synthesis.logs = [];
    state.value.recognition.error = null;
    state.value.synthesis.error = null;
  };

  // Clear persisted token storage (call on unmount)
  const clearStorage = () => {
    storages.remove(StorageKey.aliyunToken);
  };

  // Watch localStream and wire it into the local recorder
  watch(
    () => twSipState.localStream.value,
    async (newStream) => {
      if (newStream) {
        console.log('Local stream acquired, wiring recorder stream');
        audioLocalRecorder.methods.setStream(newStream);
        // Recording itself is started by startConversation once the call is up
      }
    },
    { immediate: false }
  );

  // Watch remoteStream and wire it into the remote recorder
  watch(
    () => twSipState.remoteStream.value,
    async (newStream) => {
      if (newStream) {
        console.log('Remote stream acquired, wiring remote recorder stream');
        audioRemoteRecorder.methods.setStream(newStream);
      }
    },
    { immediate: false }
  );

  // Watch call status precisely (avoids re-triggers from deep watching)
  watch(
    [() => twSipState.callStatus.value, () => twSipState.incomingCall.value],
    async ([newCallStatus, newIncomingCall]) => {
      console.log('Call status changed:', { newCallStatus, newIncomingCall });

      // Incoming call
      if (newIncomingCall && !aiCallState.isAiActive) {
        handleIncomingCall();
      }

      // Call established: start the conversation (once)
      if (newCallStatus === 'inCall') {
        aiCallState.isProcessing = true;
        hasStartedConversation.value = true;
        // Delay 1s so the media streams have stabilized
        setTimeout(() => {
          startConversation().catch((err) =>
            console.error('Failed to start conversation during call:', err)
          );
        }, 1000);
      }

      // Call ended: clean up
      if (newCallStatus === 'idle') {
        aiCallState.isProcessing = false;
        // Delay cleanup until the call has fully ended
        setTimeout(() => {
          stopConversation();
        }, 500);
      }
    },
    { immediate: false }
  );

  // Fetch the Alibaba Cloud token when the component mounts
  onMounted(async () => {
    try {
      console.log('Component mounted, fetching Alibaba Cloud token');
      const token = await getAliyunToken();
      options.token = token || storages.get(StorageKey.aliyunToken);
      if (!options.token) {
        console.error('No Alibaba Cloud token, cannot connect to the speech service');
        state.value.statusText = 'Token not acquired';
      }
      // Initialize the AI call system on demand (callers may trigger manually)
      // await initAiCallSystem();
    } catch (error) {
      console.error('Initialization on mount failed:', error);
    }
  });

  // Release resources when the component unmounts
  onUnmounted(() => {
    console.log('Component unmounted, releasing resources');
    reset();
    clearStorage();
    // Close the audio context
    if (audioContext) {
      audioContext
        .close()
        .catch((err) => console.error('Failed to close audio context:', err));
      audioContext = null;
    }
    // Disconnect the player
    player.disconnect();
  });

  // Merge the state objects (note: spreading loses reactivity,
  // so state changes are synced back below)
  const mergedState = {
    ...twSipState,
    ...state.value,
    ...aiCallState
  };

  // Sync state changes into mergedState
  watch(
    state,
    (newState) => {
      Object.assign(mergedState, newState);
    },
    { deep: true }
  );

  // Merge the method objects
  const mergedMethods: aiMethods = {
    ...twSipMethods,
    startConversation,
    stopConversation,
    toggleRecording,
    clearAudio,
    connect,
    disconnect,
    reconnect,
    reset,
    initAiCallSystem,
    makeAiCall,
    answerAiCall,
    hangupAiCall,
    resetAiCallSystem
  };

  return [mergedState, mergedMethods];
}
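One refactoring note on the hook above: the NLS WebSocket commands (`RunSynthesis`, `StopSynthesis`, `StopTranscription`) all share a nearly identical header shape, differing only in `namespace`, `name`, and `task_id` (and `StopTranscription` omits `appkey`). The sketch below is illustrative and self-contained, not part of the original hook; in the hook itself, `messageId` would come from `generateUUID()` and `appkey` from `options.appkey`.

```typescript
// Illustrative helper: builds the common header used by the NLS
// WebSocket commands in the hook above. Field names mirror the inline
// objects already shown; nothing new is assumed.
function buildNlsHeader(
  appkey: string,
  namespace: string,
  name: string,
  taskId: string,
  messageId: string
) {
  return {
    appkey,
    namespace,
    name,
    task_id: taskId,
    message_id: messageId
  };
}

// Example: the StopTranscription header sent by stopRecording
const header = buildNlsHeader(
  'your-appkey',
  'SpeechTranscriber',
  'StopTranscription',
  'task-123',
  'msg-456'
);
```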
  • useMediaRecorder.ts
typescript 复制代码
import { ref, computed, onUnmounted, nextTick, watch } from 'vue';
// @ts-ignore
import PCMAudioPlayer from '/public/audio_player.js';

// Type definitions
export type RecordingState = 'inactive' | 'recording' | 'paused';
export type RecordingType = 'audio' | 'video';
// PCM stream state
export type PCMStreamState = 'inactive' | 'active' | 'paused';

export interface RecordingOptions {
  mimeType?: string;
  audioBitsPerSecond?: number;
  videoBitsPerSecond?: number;
  bitsPerSecond?: number;
  timeSlice?: number;
  sampleRate?: number;
  autoStart?: boolean;
  initialStream?: MediaStream;
  bufferSize?: number;
  [key: string]: any;
}

export interface RecordingInfo {
  state: RecordingState;
  type: RecordingType;
  startTime: number | null;
  duration: number;
  size: number;
  blob: Blob | null;
  hasStream: boolean;
}

export interface PCMConversionOptions {
  sampleRate?: number;
  channels?: number;
}

export interface RecordingMethods {
  startRecording: () => Promise<boolean>;
  pauseRecording: () => boolean;
  resumeRecording: () => boolean;
  stopRecording: () => Promise<Blob | null>;
  downloadRecording: (filename?: string) => void;
  resetRecording: () => void;
  setStream: (stream: MediaStream | undefined, PCMStream?: boolean) => void;
  getCurrentStream: () => MediaStream | undefined;
  convertToPCM: (
    blob: Blob,
    options?: PCMConversionOptions
  ) => Promise<ArrayBuffer | null>;
  startRealTimePCMStream: (
    callback?: (pcmData: Int16Array, timestamp: number) => void,
    options?: PCMConversionOptions
  ) => Promise<boolean>;
  pauseRealTimePCMStream: () => boolean; // pause the PCM stream
  resumeRealTimePCMStream: () => boolean; // resume the PCM stream
  stopRealTimePCMStream: () => void;
  playPCMWithPlayer: (
    arrayBuffer: ArrayBuffer,
    sampleRate?: number
  ) => Promise<void>;
  playPCMWithPlayerBatch: (
    arrayBuffer: ArrayBuffer[],
    sampleRate?: number
  ) => Promise<void>;
}

export interface UseMediaRecorderReturn {
  recordingState: RecordingState;
  pcmStreamState: PCMStreamState; // PCM stream state
  recordingInfo: RecordingInfo;
  isRecording: boolean;
  isPaused: boolean;
  recordedChunks: Blob[];
  hasActiveStream: boolean;
  methods: RecordingMethods;
  recordedBlob: Blob | null;
  recordedUrl: string;
  [key: string]: any;
}

// Supported MIME types
const SUPPORTED_AUDIO_TYPES = [
  'audio/webm;codecs=opus',
  'audio/webm',
  'audio/ogg;codecs=opus',
  'audio/mp4',
  'audio/mpeg',
  'audio/wav'
];

const SUPPORTED_VIDEO_TYPES = [
  'video/webm;codecs=vp9,opus',
  'video/webm;codecs=vp8,opus',
  'video/webm;codecs=h264,opus',
  'video/webm',
  'video/mp4;codecs=h264,aac',
  'video/mp4'
];

/**
 * Pick a supported MIME type for the given recording type
 */
function getSupportedMimeType(type: RecordingType): string {
  const types =
    type === 'audio' ? SUPPORTED_AUDIO_TYPES : SUPPORTED_VIDEO_TYPES;
  for (const mimeType of types) {
    if (MediaRecorder.isTypeSupported(mimeType)) {
      return mimeType;
    }
  }
  return type === 'audio' ? 'audio/webm' : 'video/webm';
}

/**
 * Map a MIME type to a file extension
 */
function getFileExtension(mimeType: string): string {
  const extensions: Record<string, string> = {
    'audio/webm': 'webm',
    'audio/ogg': 'ogg',
    'audio/mp4': 'mp4',
    'audio/mpeg': 'mp3',
    'audio/wav': 'wav',
    'video/webm': 'webm',
    'video/mp4': 'mp4',
    'video/ogg': 'ogv'
  };
  for (const [type, ext] of Object.entries(extensions)) {
    if (mimeType.includes(type)) {
      return ext;
    }
  }
  return 'webm';
}

/**
 * Check whether a media stream is active
 */
function isStreamActive(stream: MediaStream | undefined): boolean {
  if (!stream) return false;
  const hasActiveTracks = stream
    .getTracks()
    .some((track) => track.readyState === 'live' && track.enabled);
  return hasActiveTracks;
}

// PCMAudioPlayer singleton (lazily initialized so it does not hold audio resources early)
let player: InstanceType<typeof PCMAudioPlayer> | null = null;

/**
 * Initialize the PCM player
 */
const initPCMPlayer = (sampleRate = 16000) => {
  if (!player) {
    player = new PCMAudioPlayer(sampleRate);
    player.connect();
  }
  return player;
};

/**
 * Media recording hook (records and streams PCM in real time)
 * @param type recording type: 'audio' | 'video'
 * @param options recording options
 */
export function useMediaRecorder(
  type: RecordingType = 'audio',
  options: RecordingOptions = {}
): [UseMediaRecorderReturn, RecordingMethods] {
  // Reactive state
  const currentStream = ref<MediaStream | undefined>(options.initialStream);
  const recordingState = ref<RecordingState>('inactive');
  const mediaRecorder = ref<MediaRecorder | null>(null);
  const recordedChunks = ref<Blob[]>([]);
  const recordingStartTime = ref<number | null>(null);
  const currentMimeType = ref<string>('');
  const recordedBlob = ref<Blob | null>(null);
  const recordedUrl = ref<string>('');
  const hasActiveStream = ref<boolean>(isStreamActive(options.initialStream));

  // PCM-stream reactive state
  const pcmStreamState = ref<PCMStreamState>('inactive');
  const pcmAudioContext = ref<AudioContext | null>(null);
  const pcmSourceNode = ref<MediaStreamAudioSourceNode | null>(null);
  const pcmProcessorNode = ref<ScriptProcessorNode | AudioWorkletNode | null>(
    null
  );
  const pcmCallback = ref<
    ((pcmData: Int16Array, timestamp: number) => void) | null
  >(null);

  // Default configuration
  const sampleRate = options.sampleRate || 16000;
  const bufferSize = options.bufferSize || 2048;

  // Computed properties
  const isRecording = computed(() => recordingState.value === 'recording');
  const isPaused = computed(() => recordingState.value === 'paused');

  const recordingInfo = computed<RecordingInfo>(() => ({
    state: recordingState.value,
    type,
    startTime: recordingStartTime.value,
    duration: recordingStartTime.value
      ? Math.floor((Date.now() - recordingStartTime.value) / 1000)
      : 0,
    size: recordedBlob.value?.size || 0,
    blob: recordedBlob.value,
    hasStream: hasActiveStream.value
  }));

  // Initialize the MIME type
  currentMimeType.value = options.mimeType || getSupportedMimeType(type);

  /**
   * Release recorded-data resources
   */
  const cleanupRecordedData = () => {
    if (recordedUrl.value) {
      URL.revokeObjectURL(recordedUrl.value);
      recordedUrl.value = '';
    }
    recordedChunks.value = [];
    recordedBlob.value = null;
  };

  /**
   * Release PCM-stream resources
   */
  const cleanupPCMStream = () => {
    try {
      // Disconnect the processor node
      if (pcmProcessorNode.value) {
        if ('disconnect' in pcmProcessorNode.value) {
          pcmProcessorNode.value.disconnect();
        }
        pcmProcessorNode.value = null;
      }
      // Disconnect the source node
      if (pcmSourceNode.value) {
        pcmSourceNode.value.disconnect();
        pcmSourceNode.value = null;
      }
      // Close the audio context
      if (pcmAudioContext.value) {
        pcmAudioContext.value
          .close()
          .catch((err) => console.error('Failed to close audio context:', err));
        pcmAudioContext.value = null;
      }
      // Reset PCM state
      pcmStreamState.value = 'inactive';
      pcmCallback.value = null;
    } catch (error) {
      console.error('Failed to release PCM stream resources:', error);
    }
  };

  /**
   * Update stream status
   */
  const updateStreamStatus = (stream?: MediaStream) => {
    const wasActive = hasActiveStream.value;
    hasActiveStream.value = isStreamActive(stream);

    // Auto-start recording when a stream first becomes active
    if (
      !wasActive &&
      hasActiveStream.value &&
      options.autoStart &&
      recordingState.value === 'inactive'
    ) {
      nextTick(() => {
        startRecording();
      });
    }

    // If the stream died while recording, stop recording
    if (
      wasActive &&
      !hasActiveStream.value &&
      recordingState.value !== 'inactive'
    ) {
      console.warn('Media stream lost, stopping recording');
      stopRecording();
      cleanupPCMStream(); // release the PCM stream as well
    }

    return hasActiveStream.value;
  };

  /**
   * Initialize the MediaRecorder instance
   */
  const initializeRecorder = (): boolean => {
    if (!currentStream.value) {
      console.warn('Cannot initialize recorder: no media stream available');
      return false;
    }

    if (!window.MediaRecorder) {
      console.error('This browser does not support the MediaRecorder API');
      return false;
    }

    try {
      // Filter tracks by recording type
      let targetStream = currentStream.value;
      if (type === 'audio') {
        const audioTracks = currentStream.value.getAudioTracks();
        if (audioTracks.length === 0) {
          console.warn('Media stream has no audio tracks');
          return false;
        }
        targetStream = new MediaStream(audioTracks);
      }

      const recorderOptions: MediaRecorderOptions = {
        mimeType: currentMimeType.value,
        audioBitsPerSecond: options.audioBitsPerSecond || 128000,
        videoBitsPerSecond: options.videoBitsPerSecond || 2500000
      };

      // Try to create the MediaRecorder with the requested options
      try {
        mediaRecorder.value = new MediaRecorder(targetStream, recorderOptions);
      } catch (error) {
        console.warn('Creating MediaRecorder with the given options failed, falling back to defaults:', error);
        mediaRecorder.value = new MediaRecorder(targetStream);
        currentMimeType.value = mediaRecorder.value.mimeType;
      }

      // Wire up event handlers
      mediaRecorder.value.ondataavailable = async (event) => {
        if (event.data && event.data.size > 0) {
          recordedChunks.value.push(event.data);
          if (recordedChunks.value.length > 0) {
            recordedBlob.value = new Blob(recordedChunks.value, {
              type: currentMimeType.value
            });
            recordedUrl.value = URL.createObjectURL(recordedBlob.value);
          }
        }
      };

      mediaRecorder.value.onstart = () => {
        recordingState.value = 'recording';
        recordingStartTime.value = Date.now();
        // Resume the PCM stream automatically when recording starts
        if (pcmStreamState.value === 'paused') {
          resumeRealTimePCMStream();
        }
      };

      mediaRecorder.value.onstop = () => {
        recordingState.value = 'inactive';
        cleanupPCMStream(); // release the PCM stream as well
        if (recordedChunks.value.length > 0) {
          recordedBlob.value = new Blob(recordedChunks.value, {
            type: currentMimeType.value
          });
          recordedUrl.value = URL.createObjectURL(recordedBlob.value);
        }
      };

      mediaRecorder.value.onpause = () => {
        recordingState.value = 'paused';
        pauseRealTimePCMStream(); // pause the PCM stream in sync
      };

      mediaRecorder.value.onresume = () => {
        recordingState.value = 'recording';
        resumeRealTimePCMStream(); // resume the PCM stream in sync
      };

      mediaRecorder.value.onerror = (event) => {
        console.error('Recording error:', event);
        recordingState.value = 'inactive';
        cleanupRecordedData();
        cleanupPCMStream(); // release the PCM stream on error
      };

      return true;
    } catch (error) {
      console.error('Failed to initialize MediaRecorder:', error);
      return false;
    }
  };

  /**
   * Start recording
   */
  const startRecording = async (): Promise<boolean> => {
    if (recordingState.value !== 'inactive') {
      console.warn('Recording is already in progress or paused');
      return false;
    }

    // Ensure there is a usable stream
    if (!hasActiveStream.value) {
      console.error('No active media stream available for recording');
      return false;
    }

    // Initialize (or re-initialize) the recorder
    if (!mediaRecorder.value) {
      if (!initializeRecorder()) {
        return false;
      }
    }

    try {
      const timeSlice = options.timeSlice || (type === 'audio' ? 2000 : 1000);
      mediaRecorder.value!.start(timeSlice);
      return true;
    } catch (error) {
      console.error('Failed to start recording:', error);
      // Re-initialize once and retry
      mediaRecorder.value = null;
      if (initializeRecorder()) {
        try {
          const timeSlice =
            options.timeSlice || (type === 'audio' ? 2000 : 1000);
          mediaRecorder.value!.start(timeSlice);
          return true;
        } catch (retryError) {
          console.error('Recording still failed after re-initialization:', retryError);
          return false;
        }
      }
      return false;
    }
  };

  /**
   * Pause recording (the PCM stream is paused via the onpause handler)
   */
  const pauseRecording = (): boolean => {
    if (mediaRecorder.value && recordingState.value === 'recording') {
      try {
        mediaRecorder.value.pause();
        return true;
      } catch (error) {
        console.error('Failed to pause recording:', error);
        return false;
      }
    }
    return false;
  };

  /**
   * Resume recording (the PCM stream is resumed via the onresume handler)
   */
  const resumeRecording = (): boolean => {
    if (mediaRecorder.value && recordingState.value === 'paused') {
      try {
        mediaRecorder.value.resume();
        return true;
      } catch (error) {
        console.error('Failed to resume recording:', error);
        return false;
      }
    }
    return false;
  };

  /**
   * Stop recording and resolve with the final Blob
   */
  const stopRecording = async (): Promise<Blob | null> => {
    return new Promise((resolve) => {
      if (!mediaRecorder.value || recordingState.value === 'inactive') {
        resolve(null);
        return;
      }

      const onStopHandler = () => {
        mediaRecorder.value?.removeEventListener('stop', onStopHandler);
        resolve(recordedBlob.value);
      };

      mediaRecorder.value.addEventListener('stop', onStopHandler);

      try {
        mediaRecorder.value.stop();
      } catch (error) {
        console.error('Failed to stop recording:', error);
        resolve(null);
      }
    });
  };

  /**
   * Download the recorded file
   */
  const downloadRecording = (filename?: string) => {
    if (!recordedBlob.value) {
      throw new Error('No recording available to download');
    }

    const fileExtension = getFileExtension(currentMimeType.value);
    const timestamp = new Date().toISOString().slice(0, 19).replace(/:/g, '-');
    const downloadFilename =
      filename || `${type}-recording-${timestamp}.${fileExtension}`;

    const downloadLink = document.createElement('a');
    downloadLink.href = recordedUrl.value;
    downloadLink.download = downloadFilename;
    document.body.appendChild(downloadLink);
    downloadLink.click();
    document.body.removeChild(downloadLink);
  };

  /**
   * Reset recording state
   */
  const resetRecording = () => {
    if (mediaRecorder.value && recordingState.value !== 'inactive') {
      try {
        mediaRecorder.value.stop();
      } catch (error) {
        console.error('Error while stopping recording:', error);
      }
    }
    recordingState.value = 'inactive';
    recordingStartTime.value = null;
    mediaRecorder.value = null;
    cleanupRecordedData();
    cleanupPCMStream(); // release the PCM stream on reset
  };

  /**
   * Set the media stream
   */
  const setStream = async (
    newStream: MediaStream | undefined,
    PCMStream?: boolean
  ) => {
    const wasRecording = recordingState.value === 'recording';
    const wasPaused = recordingState.value === 'paused';

    if (PCMStream) {
      currentStream.value = newStream;
      return;
    }

    // If recording, stop first
    if (wasRecording || wasPaused) {
      await stopRecording();
      currentStream.value = newStream;
      updateStreamStatus(newStream);
      mediaRecorder.value = null; // reset the recorder

      // If we were recording and the new stream is valid, restart automatically
      if (wasRecording && newStream) {
        nextTick(() => startRecording());
      }
    } else {
      currentStream.value = newStream;
      updateStreamStatus(newStream);
      mediaRecorder.value = null; // reset the recorder
    }
  };

  /**
   * Get the current media stream
   */
  const getCurrentStream = (): MediaStream | undefined => {
    return currentStream.value;
  };

  /**
   * Start real-time PCM conversion of the audio stream
   */
  const startRealTimePCMStream = async (
    callback?: (pcmData: Int16Array, timestamp: number) => void,
    options?: PCMConversionOptions
  ): Promise<boolean> => {
    // 1. Validate state
    if (pcmStreamState.value === 'active') {
      console.warn('PCM stream is already running');
      return false;
    }
    if (!currentStream.value) {
      console.warn('No media stream available for real-time PCM conversion');
      return false;
    }
    const audioTracks = currentStream.value.getAudioTracks();
    if (audioTracks.length === 0 || audioTracks[0].readyState !== 'live') {
      console.warn('Media stream has no live audio track');
      return false;
    }

    try {
      // 2. Tear down any previous PCM stream
      cleanupPCMStream();

      // 3. Initialize the PCM player
      const pcmPlayer = initPCMPlayer(sampleRate);

      // 4. Create the audio context (mind the browser autoplay policy)
      const AudioContextConstructor =
        window.AudioContext || (window as any).webkitAudioContext;
      pcmAudioContext.value = new AudioContextConstructor({
        sampleRate: sampleRate
      });

      // Resume a suspended context (works around autoplay restrictions)
      if (pcmAudioContext.value.state === 'suspended') {
        await pcmAudioContext.value.resume();
      }

      // 5. Create the media-stream source node
      pcmSourceNode.value = pcmAudioContext.value.createMediaStreamSource(
        currentStream.value
      );

      // 6. Create the audio processor (old- and new-browser compatible)
      if ('createScriptProcessor' in pcmAudioContext.value) {
        // ScriptProcessorNode: deprecated, but the most widely supported
        pcmProcessorNode.value = pcmAudioContext.value.createScriptProcessor(
          bufferSize,
          1,
          1
        );
      } else {
        // AudioWorklet (recommended for modern browsers)
        // Note: the worklet script must be served and loadable; simplified here
        await pcmAudioContext.value?.audioWorklet?.addModule(
          '../utils/audio-processor.js'
        );
        pcmProcessorNode.value = new AudioWorkletNode(
          pcmAudioContext.value,
          'audio-processor',
          {
            processorOptions: { bufferSize, sampleRate }
          }
        );
      }

      // 7. Save the callback
      pcmCallback.value = callback ?? null;

      // 8. Audio processing (with resampling to the target rate)
      const targetSampleRate = options?.sampleRate || sampleRate;
      const processAudio = (inputData: Float32Array) => {
        // Resample from the context rate to the target rate
        const resampledData = resampleSingleChannel(
          inputData,
          pcmAudioContext.value!.sampleRate,
          targetSampleRate
        );

        // Convert to 16-bit PCM
        const pcmData = new Int16Array(resampledData.length);
        for (let i = 0; i < resampledData.length; i++) {
          const sample = Math.max(-1, Math.min(1, resampledData[i]));
          pcmData[i] = sample < 0 ? sample * 0x8000 : sample * 0x7fff;
        }

        // Optionally push to the local player for monitoring
        // pcmPlayer.pushPCM(pcmData.buffer);

        // Invoke the callback
        if (pcmCallback.value && pcmStreamState.value === 'active') {
          pcmCallback.value(pcmData, Date.now());
        }
      };

      // 9. Bind the processor events
      if ('onaudioprocess' in pcmProcessorNode.value) {
        (pcmProcessorNode.value as ScriptProcessorNode).onaudioprocess = (
          event
        ) => {
          // Only process data while the stream is active
          if (pcmStreamState.value !== 'active') return;
          const inputData = event.inputBuffer.getChannelData(0);
          processAudio(inputData);
        };
      } else {
        (pcmProcessorNode.value as AudioWorkletNode).port.onmessage = (
          event
        ) => {
          if (pcmStreamState.value !== 'active' || !event.data.inputData)
            return;
          processAudio(new Float32Array(event.data.inputData));
        };
      }

      // 10. Connect the audio graph (a ScriptProcessorNode must be
      // connected to the destination for onaudioprocess to fire)
      pcmSourceNode.value.connect(pcmProcessorNode.value);
      pcmProcessorNode.value.connect(pcmAudioContext.value.destination);

      // 11. Mark the PCM stream as active
      pcmStreamState.value = 'active';
      console.log(`Real-time PCM stream started, sample rate: ${targetSampleRate}`);
      return true;
    } catch (error) {
      console.error('Failed to start real-time PCM conversion:', error);
      cleanupPCMStream();
      return false;
    }
  };

  /**
   * Pause the real-time PCM stream
   */
  const pauseRealTimePCMStream = (): boolean => {
    if (pcmStreamState.value !== 'active' || !pcmAudioContext.value) {
      console.warn('PCM stream is not running or is already paused');
      return false;
    }

    try {
      // Suspend the audio context
      pcmAudioContext.value.suspend();
      pcmStreamState.value = 'paused';
      console.log('Real-time PCM stream paused');
      return true;
    } catch (error) {
      console.error('Failed to pause real-time PCM stream:', error);
      return false;
    }
  };

  /**
   * Resume the real-time PCM stream
   */
  const resumeRealTimePCMStream = (): boolean => {
    if (pcmStreamState.value !== 'paused' || !pcmAudioContext.value) {
      console.warn('PCM stream is not paused or not initialized');
      return false;
    }

    try {
      // Resume the audio context
      pcmAudioContext.value.resume();
      pcmStreamState.value = 'active';
      console.log('Real-time PCM stream resumed');
      return true;
    } catch (error) {
      console.error('Failed to resume real-time PCM stream:', error);
      return false;
    }
  };

  /**
   * Stop the real-time PCM stream
   */
  const stopRealTimePCMStream = () => {
    cleanupPCMStream();
    console.log('Real-time PCM stream stopped');
  };

  // Watch the initial stream for changes
  if (options.initialStream) {
    watch(
      () => options.initialStream,
      (newStream) => {
        if (newStream !== currentStream.value) {
          setStream(newStream);
        }
      },
      { deep: true }
    );
  }

  // Release all resources on unmount
  onUnmounted(() => {
    resetRecording();
    cleanupPCMStream(); // release the PCM stream
    if (recordedUrl.value) {
      URL.revokeObjectURL(recordedUrl.value);
    }
    // Tear down the player
    if (player) {
      // PCMAudioPlayer is assumed to expose a disconnect method
      if (typeof player.disconnect === 'function') {
        player.disconnect();
      }
      player = null;
    }
  });

  // Method collection
  const methods = {
    startRecording,
    pauseRecording,
    resumeRecording,
    stopRecording,
    downloadRecording,
    resetRecording,
    setStream,
    getCurrentStream,
    convertToPCM,
    startRealTimePCMStream,
    pauseRealTimePCMStream, // 新增
    resumeRealTimePCMStream, // 新增
    stopRealTimePCMStream,
    playPCMWithPlayer,
    playPCMWithPlayerBatch
  };

  // 状态返回(注意:此处展开的是各 ref 的 .value 快照,不具备响应性;
  // 如需在模板中保持响应式,可直接返回 ref 本身)
  const state = {
    recordingState: recordingState.value,
    pcmStreamState: pcmStreamState.value, // 新增PCM流状态
    recordingInfo: recordingInfo.value,
    isRecording: isRecording.value,
    isPaused: isPaused.value,
    hasActiveStream: hasActiveStream.value,
    recordedChunks: recordedChunks.value,
    methods,
    recordedBlob: recordedBlob.value,
    recordedUrl: recordedUrl.value
  };

  return [state, methods];
}

// ===================== 工具函数 =====================

/**
 * 将音频Blob转换为16位PCM格式
 */
async function convertToPCM(
  blob: Blob,
  options: PCMConversionOptions = {}
): Promise<ArrayBuffer | null> {
  try {
    const audioContext = new (window.AudioContext ||
      (window as any).webkitAudioContext)();
    const arrayBuffer = await blob.arrayBuffer();
    const audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

    // 获取参数
    const targetSampleRate = options.sampleRate || 16000;
    const channels = options.channels || 1;

    // 获取源数据
    let sourceData: Float32Array[] = [];
    for (let i = 0; i < audioBuffer.numberOfChannels; i++) {
      sourceData.push(audioBuffer.getChannelData(i));
    }

    // 重采样和声道处理
    let resampledData: Float32Array[];
    if (
      audioBuffer.sampleRate !== targetSampleRate ||
      audioBuffer.numberOfChannels !== channels
    ) {
      resampledData = resampleAudio(
        sourceData,
        audioBuffer.sampleRate,
        targetSampleRate,
        audioBuffer.numberOfChannels,
        channels
      );
    } else {
      resampledData = sourceData;
    }

    // 转换为16位PCM
    const pcmData = floatTo16BitPCM(resampledData);

    // 清理资源
    audioContext.close();
    return pcmData;
  } catch (error) {
    console.error('转换为PCM失败:', error);
    return null;
  }
}

/**
 * 重采样音频数据
 */
function resampleAudio(
  sourceData: Float32Array[],
  sourceSampleRate: number,
  targetSampleRate: number,
  sourceChannels: number,
  targetChannels: number
): Float32Array[] {
  const ratio = sourceSampleRate / targetSampleRate;
  const targetLength = Math.round(sourceData[0].length / ratio);

  const resampledData: Float32Array[] = [];
  // 初始化目标声道数据
  for (let i = 0; i < targetChannels; i++) {
    resampledData.push(new Float32Array(targetLength));
  }

  // 重采样处理
  for (let channel = 0; channel < targetChannels; channel++) {
    const sourceChannel = sourceData[Math.min(channel, sourceChannels - 1)];
    const targetChannel = resampledData[channel];

    for (let i = 0; i < targetLength; i++) {
      const sourceIndex = i * ratio;
      const lowerIndex = Math.floor(sourceIndex);
      const upperIndex = Math.min(
        Math.ceil(sourceIndex),
        sourceChannel.length - 1
      );
      const interpolation = sourceIndex - lowerIndex;
      // 线性插值
      targetChannel[i] =
        sourceChannel[lowerIndex] * (1 - interpolation) +
        sourceChannel[upperIndex] * interpolation;
    }
  }

  return resampledData;
}

/**
 * 单声道重采样(优化精度)
 */
function resampleSingleChannel(
  sourceData: Float32Array,
  sourceSampleRate: number,
  targetSampleRate: number
): Float32Array {
  if (sourceSampleRate === targetSampleRate) {
    return sourceData;
  }

  const ratio = sourceSampleRate / targetSampleRate;
  const targetLength = Math.round(sourceData.length / ratio);
  const targetData = new Float32Array(targetLength);

  // 优化的线性插值
  for (let i = 0; i < targetLength; i++) {
    const sourceIndex = i * ratio;
    const lowerIndex = Math.floor(sourceIndex);
    if (lowerIndex >= sourceData.length - 1) {
      targetData[i] = sourceData[sourceData.length - 1];
      continue;
    }
    const upperIndex = lowerIndex + 1;
    const interpolation = sourceIndex - lowerIndex;
    targetData[i] =
      sourceData[lowerIndex] * (1 - interpolation) +
      sourceData[upperIndex] * interpolation;
  }

  return targetData;
}

/**
 * 浮点数转16位PCM
 */
function floatTo16BitPCM(data: Float32Array[]): ArrayBuffer {
  const totalSamples = data[0].length * data.length;
  const buffer = new ArrayBuffer(totalSamples * 2);
  const view = new DataView(buffer);
  let offset = 0;

  if (data.length === 1) {
    // 单声道
    const channelData = data[0];
    for (let i = 0; i < channelData.length; i++) {
      const s = Math.max(-1, Math.min(1, channelData[i]));
      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      offset += 2;
    }
  } else {
    // 多声道交错
    for (let i = 0; i < data[0].length; i++) {
      for (let channel = 0; channel < data.length; channel++) {
        const s = Math.max(-1, Math.min(1, data[channel][i]));
        view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
        offset += 2;
      }
    }
  }

  return buffer;
}

/**
 * 使用PCMAudioPlayer播放PCM音频
 */
async function playPCMWithPlayer(
  arrayBuffer: ArrayBuffer,
  sampleRate: number = 16000
): Promise<void> {
  try {
    const pcmPlayer = initPCMPlayer(sampleRate);
    pcmPlayer.pushPCM(arrayBuffer);
    console.log('开始使用PCMAudioPlayer播放PCM音频');
  } catch (error) {
    console.error('使用PCMAudioPlayer播放PCM音频失败:', error);
    throw error;
  }
}

/**
 * 批量播放PCM音频
 */
async function playPCMWithPlayerBatch(
  audioData: ArrayBuffer[],
  sampleRate: number = 16000
): Promise<void> {
  try {
    if (audioData.length === 0) {
      console.warn('没有音频数据可播放');
      return;
    }

    // 合并音频数据
    const totalLength = audioData.reduce((sum, buf) => sum + buf.byteLength, 0);
    const merged = new Uint8Array(totalLength);
    let offset = 0;
    audioData.forEach((buf) => {
      merged.set(new Uint8Array(buf), offset);
      offset += buf.byteLength;
    });

    // 使用PCM播放器播放
    const pcmPlayer = initPCMPlayer(sampleRate);
    pcmPlayer.pushPCM(merged.buffer);
    console.log('开始批量播放PCM音频');
  } catch (error) {
    console.error('批量播放PCM音频失败:', error);
    throw error;
  }
}
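上面两个工具函数的核心数值逻辑(线性插值重采样 + 浮点转 16 位 PCM),可以脱离浏览器环境单独验证。下面是一段可在 Node 中直接运行的简化示意(单声道版本,`resampleMono`、`toPCM16` 为本示例自拟的函数名,与上文 `resampleSingleChannel` / `floatTo16BitPCM` 逻辑一致):

```typescript
// 简化示意:单声道线性插值重采样 + 16 位 PCM 量化
function resampleMono(
  src: Float32Array,
  srcRate: number,
  dstRate: number
): Float32Array {
  if (srcRate === dstRate) return src;
  const ratio = srcRate / dstRate;
  const out = new Float32Array(Math.round(src.length / ratio));
  for (let i = 0; i < out.length; i++) {
    const pos = i * ratio;
    const lo = Math.floor(pos);
    if (lo >= src.length - 1) {
      out[i] = src[src.length - 1];
      continue;
    }
    const t = pos - lo;
    // 相邻两个采样点做线性插值
    out[i] = src[lo] * (1 - t) + src[lo + 1] * t;
  }
  return out;
}

function toPCM16(samples: Float32Array): Int16Array {
  const pcm = new Int16Array(samples.length);
  for (let i = 0; i < samples.length; i++) {
    // 截幅到 [-1, 1]:负半轴映射到 [-32768, 0),正半轴映射到 [0, 32767]
    const s = Math.max(-1, Math.min(1, samples[i]));
    pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return pcm;
}

// 48kHz 的 0.1 秒正弦波 → 重采样到 16kHz → 量化为 16 位 PCM
const src = Float32Array.from({ length: 4800 }, (_, i) =>
  Math.sin((2 * Math.PI * 440 * i) / 48000)
);
const resampled = resampleMono(src, 48000, 16000);
const pcm = toPCM16(resampled);
console.log(resampled.length, pcm.length); // 长度缩为原来的 1/3
```

实际转换请以上文 `convertToPCM` 为准,这里仅用于独立验证核心数值逻辑;送给 ASR 前记得保持 16kHz / 单声道 / 16 位小端这三个参数一致。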
typescript 复制代码
import baseConfig, { AiConfigKeys } from '@/utils/baseConfig';
import { storages } from '@/utils/storage';
import { ref, computed, onUnmounted, Ref, nextTick, reactive } from 'vue';
// @ts-ignore
import PCMAudioPlayer from '/public/audio_player.js';
import {
  aiAgentCreateSession,
  aiAgentRun,
  getAliyunToken,
  runAiAgent
} from '@/api/aiAgent';
import { StorageKey } from '@/enum/storage';
import { gdPoiQuery } from '@/api/cim';

// 类型定义
export interface SpeechOptions {
  appkey: string;
  token?: string;
  sampleRate?: number;
  bufferSize?: number;
  chatListRef: Ref<HTMLElement>;
  onRecognitionEnd?: (v?: any) => void;
  onSynthesisEnd?: (v?: any) => void;
}

export interface WebSocketMessage {
  header: {
    appkey?: string;
    namespace: string;
    name: string;
    task_id?: string;
    message_id?: string;
  };
  payload?: any;
}

export enum UserType {
  Send = 'send',
  Receive = 'receive'
}

export type Message = {
  messageId: string;
  textMsg: string;
  type: string;
  userType: UserType;
  time?: string;
  end?: boolean;
  [key: string]: any;
};

export interface WebSocketConnection {
  socket: WebSocket | null;
  status:
    | 'disconnected'
    | 'connecting'
    | 'connected'
    | 'error'
    | 'synthesizing';
  taskId: string;
  error: string | null;
  logs: string[];
}

// 整合后的状态接口
export interface SpeechState {
  // 聊天管理
  msgList: Message[];
  isThinking: boolean;
  voiceStyle: string;
  // 语音识别
  isRecording: boolean;
  transcript: string;
  finalTranscript: string;
  aiResObject: any;
  // 语音合成
  isSynthesizing: boolean;
  isPlaying: boolean;
  audioData: ArrayBuffer[];
  voiceOptions: any[];
  // 连接状态
  recognition: WebSocketConnection;
  synthesis: WebSocketConnection;

  // 计算属性
  isConnected: boolean;
  isSynthesisConnected: boolean;
  statusText: string;
  synthesisStatusText: string;
}

// 整合后的方法接口
export interface SpeechMethods {
  // 核心流程控制
  startConversation: () => Promise<void>;
  stopConversation: () => void;
  sendToAIArr: (arr: string, fun?: any) => void;

  // 语音操作
  toggleRecording: () => Promise<void>;
  playAudio: () => Promise<void>;
  clearAudio: () => void;

  // 连接管理
  connect: () => Promise<void>;
  disconnect: () => void;
  reconnect: () => Promise<void>;
  startSynthesis: (value: string) => Promise<void>;
  startRecording: () => Promise<void>;
  stopRecording: () => void;

  setMsgList: (arr: Message[]) => void;

  // 工具方法
  reset: () => void;
  setAiStyle: (value: string) => void;
  logMessage: (message: string, type: 'recognition' | 'synthesis') => void;
}

export type UseSpeechReturn = [Ref<SpeechState>, SpeechMethods];

// 全局类型声明
declare global {
  interface Window {
    webkitAudioContext: typeof AudioContext;
  }
}

/**
 * 整合语音识别、合成与聊天管理的完整Hook
 * 提供端到端的语音交互流程:录音→识别→AI回复→合成→播放
 */
export function useSpeechCustom(options: SpeechOptions): UseSpeechReturn {
  // 默认配置
  const sampleRate = options.sampleRate || 16000;
  const bufferSize = options.bufferSize || 2048;

  // 音频相关实例
  let audioContext: AudioContext | null = null;
  let scriptProcessor: ScriptProcessorNode | null = null;
  let audioInput: MediaStreamAudioSourceNode | null = null;
  let audioStream: MediaStream | null = null;
  let player = new PCMAudioPlayer(24000);
  let abortController: AbortController | null = null;

  // 状态管理
  const state = ref<SpeechState>({
    msgList: [],
    isThinking: false,
    isRecording: false,
    transcript: '',
    finalTranscript: '',
    isSynthesizing: false,
    isPlaying: false,
    aiResObject: null,
    audioData: [],
    recognition: {
      socket: null,
      status: 'disconnected',
      taskId: '',
      error: null,
      logs: []
    },
    synthesis: {
      socket: null,
      status: 'disconnected',
      taskId: '',
      error: null,
      logs: []
    },
    voiceStyle: 'aiting',
    voiceOptions: [
      { value: 'longjiayi_v2', label: '粤语-龙嘉怡' },
      { value: 'longyingtian', label: '龙应甜' },
      { value: 'longxiaoxia_v2', label: '龙小夏' },
      { value: 'longyingmu', label: '龙应沐' },
      { value: 'longanpei', label: '龙安培' },
      { value: 'longxiaochun_v2', label: '龙安柔' }
    ],
    isConnected: false,
    isSynthesisConnected: false,
    statusText: '未连接',
    synthesisStatusText: '未连接'
  });

  // 计算属性 - 统一状态映射
  const statusTextMap = computed(() => ({
    disconnected: '未连接',
    connecting: '连接中...',
    connected: '已连接',
    synthesizing: '合成中',
    error: '错误'
  }));

  // 实时计算连接状态
  const updateComputedStates = () => {
    state.value.isConnected = state.value.recognition.status === 'connected';
    state.value.isSynthesisConnected =
      state.value.synthesis.status === 'connected' ||
      state.value.synthesis.status === 'synthesizing';
    state.value.statusText =
      statusTextMap.value[state.value.recognition.status];
    state.value.synthesisStatusText =
      statusTextMap.value[state.value.synthesis.status];
  };

  // 日志管理
  const logMessage = (message: string, type: 'recognition' | 'synthesis') => {
    const timestamp = new Date().toLocaleTimeString();
    state.value[type].logs.push(`${timestamp}: ${message}`);

    // 限制日志长度
    if (state.value[type].logs.length > 100) {
      state.value[type].logs.shift();
    }
    // console.log(state.value[type].logs, message);
  };

  // WebSocket连接管理器
  const connectWebSocket = async (
    type: 'recognition' | 'synthesis',
    voiceStyle?: string
  ) => {
    // 验证配置
    if (!options.appkey || !options.token) {
      const errorMsg = '请提供AppKey和Token';
      state.value[type].error = errorMsg;
      logMessage(errorMsg, type);
      throw new Error(errorMsg);
    }

    // 断开现有连接
    if (state.value[type].socket) {
      state.value[type].socket?.close();
      state.value[type].socket = null;
    }

    // 更新状态
    state.value[type].status = 'connecting';
    state.value[type].error = null;
    updateComputedStates();

    // 连接配置
    const endpoints = {
      recognition: `wss://nls-gateway-cn-beijing.aliyuncs.com/ws/v1?token=${options.token}`,
      synthesis: `wss://nls-gateway-cn-beijing.aliyuncs.com/ws/v1?token=${options.token}`
    };
    const taskId = generateUUID();
    state.value[type].taskId = taskId;

    try {
      const socket = new WebSocket(endpoints[type]);
      state.value[type].socket = socket;

      if (type === 'synthesis') {
        socket.binaryType = 'arraybuffer';
        player.connect();
        player.stop();
      }

      // 连接成功回调
      socket.onopen = () => {
        state.value[type].status = 'connected';
        updateComputedStates();

        // 发送启动指令
        const startMessages = {
          recognition: {
            header: {
              appkey: options.appkey,
              namespace: 'SpeechTranscriber',
              name: 'StartTranscription',
              task_id: taskId,
              message_id: generateUUID()
            },
            payload: {
              format: 'pcm',
              sample_rate: sampleRate,
              enable_intermediate_result: true,
              enable_punctuation_prediction: true,
              enable_inverse_text_normalization: true,
              enable_voice_detection: true
            }
          },
          synthesis: {
            header: {
              appkey: options.appkey,
              namespace: 'FlowingSpeechSynthesizer',
              name: 'StartSynthesis',
              task_id: taskId,
              message_id: generateUUID()
            },
            payload: {
              voice:
                state.value.voiceStyle || state.value.voiceOptions[4].value,
              format: 'PCM',
              sample_rate: sampleRate,
              volume: 100,
              speech_rate: -200,
              pitch_rate: 0
            }
          }
        };

        socket.send(JSON.stringify(startMessages[type]));
      };

      // 消息处理
      socket.onmessage = async (event) => {
        type === 'recognition'
          ? await handleRecognitionMessage(event)
          : await handleSynthesisMessage(event);
      };

      // 错误处理
      socket.onerror = () => {
        state.value[type].status = 'error';
        state.value[type].error = `${
          type === 'recognition' ? '语音识别' : '语音合成'
        }连接错误`;

        updateComputedStates();
      };

      // 关闭处理
      socket.onclose = (event) => {
        if (state.value[type].status !== 'disconnected') {
          state.value[type].status = 'disconnected';

          updateComputedStates();
        }
      };
    } catch (error) {
      state.value[type].status = 'error';
      state.value[type].error = `连接失败: ${(error as Error).message}`;

      updateComputedStates();
      throw error;
    }
  };

  // 识别消息处理
  const handleRecognitionMessage = async (event: MessageEvent) => {
    try {
      const message: WebSocketMessage = JSON.parse(event.data);
      console.log(message.header.name, 'message');

      switch (message.header.name) {
        case 'TranscriptionStarted':
          console.log('识别开始,启动录音');
          await startRecording();
          break;

        case 'TranscriptionResultChanged':
          state.value.transcript = message.payload.result;
          break;

        case 'SentenceEnd':
          state.value.finalTranscript = message.payload.result;

          // stopRecording();
          // 自动触发AI对话流程
          if (state.value.finalTranscript) {
            if (!state.value.isThinking) {
              addMessage(state.value.finalTranscript, UserType.Send);
            }

            options?.onRecognitionEnd &&
              options?.onRecognitionEnd?.(state.value.finalTranscript);
          }
          break;

        case 'TaskFailed':
          state.value.recognition.status = 'error';
          state.value.recognition.error = `识别失败: ${message.payload.message}`;

          updateComputedStates();
          break;
      }
    } catch (error) {
      const errMsg = `解析识别消息失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
    }
  };

  // 合成消息处理
  const handleSynthesisMessage = async (event: MessageEvent) => {
    if (typeof event.data === 'string') {
      try {
        const message: WebSocketMessage = JSON.parse(event.data);

        switch (message.header.name) {
          case 'SynthesisStarted':
            break;

          case 'SynthesisCompleted':
            state.value.synthesis.status = 'connected';
            state.value.isSynthesizing = false;

            // 自动播放合成结果
            playAudio();
            updateComputedStates();
            break;

          case 'TaskFailed':
            state.value.synthesis.status = 'error';
            state.value.synthesis.error = `合成失败: ${message.payload.message}`;
            state.value.isSynthesizing = false;

            updateComputedStates();
            break;
        }
      } catch (error) {
        connectWebSocket('synthesis');
        const errMsg = `解析合成消息失败: ${(error as Error).message}`;
        state.value.synthesis.error = errMsg;
        logMessage(errMsg, 'synthesis');
      }
    } else if (event.data instanceof ArrayBuffer) {
      const data = event.data;
      // 处理二进制音频数据
      state.value.audioData.push(event.data);
      player.pushPCM(data);
      // console.log(state.value.audioData, 'state.value.audioData');
      options?.onSynthesisEnd && options?.onSynthesisEnd?.(data);
      logMessage(`收到音频数据 (${event.data.byteLength} bytes)`, 'synthesis');
    }
  };

  // 初始化音频上下文
  const initAudioContext = async () => {
    if (audioContext) {
      // 检查并激活已存在的 AudioContext
      if (audioContext.state === 'suspended') {
        await audioContext.resume();
        logMessage('音频上下文已激活', 'recognition');
      }
      return;
    }

    try {
      audioContext = new (window.AudioContext || window.webkitAudioContext)({
        sampleRate
      });
      logMessage('音频上下文初始化成功', 'recognition');
    } catch (error) {
      const errMsg = `音频初始化失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      logMessage(errMsg, 'recognition');
      throw error;
    }
  };

  // 开始录音
  const startRecording = async () => {
    if (state.value.isRecording) return;
    console.log(state.value.isRecording);
    console.log(
      state.value.recognition.socket?.readyState,
      ' state.value.recognition.socket?.readyState'
    );

    try {
      await initAudioContext();

      // 识别连接已断开时先重连识别通道,再由 TranscriptionStarted 触发录音
      if (state.value.recognition.socket?.readyState === WebSocket.CLOSED) {
        connectWebSocket('recognition');
        return;
      }
      // 获取麦克风权限
      audioStream = await navigator.mediaDevices.getUserMedia({
        audio: {
          echoCancellation: true,
          noiseSuppression: true,
          sampleRate
        }
      });

      audioInput = audioContext!.createMediaStreamSource(audioStream);
      scriptProcessor = audioContext!.createScriptProcessor(bufferSize, 1, 1);
      console.log(scriptProcessor, 'scriptProcessor');

      // 音频处理
      scriptProcessor.onaudioprocess = function (event) {
        // 增加日志验证事件是否触发
        // console.log('onaudioprocess 事件触发', event);
        const inputData = event.inputBuffer.getChannelData(0);
        const pcmData = new Int16Array(inputData.length);

        // 转换为16位PCM
        for (let i = 0; i < inputData.length; i++) {
          pcmData[i] = Math.max(-1, Math.min(1, inputData[i])) * 0x7fff;
        }

        // 发送音频数据
        if (state.value.recognition.socket?.readyState === WebSocket.OPEN) {
          // console.log(pcmData.buffer, 'pcmData.buffer');
          state.value.recognition.socket.send(pcmData.buffer);
        }
      };

      // 连接音频节点
      audioInput.connect(scriptProcessor);
      scriptProcessor.connect(audioContext!.destination);

      state.value.isRecording = true;
      logMessage('录音已开始', 'recognition');
    } catch (error) {
      const errMsg = `录音失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      logMessage(errMsg, 'recognition');
      state.value.isRecording = false;
      throw error;
    }
  };
  //设置ai语音风格
  const setAiStyle = async (style: string) => {
    if (
      !state.value.synthesis.socket ||
      state.value.synthesis.socket.readyState !== WebSocket.OPEN
    ) {
      throw new Error('合成连接未就绪');
    }
    state.value.synthesis.socket.send(
      JSON.stringify({
        header: {
          message_id: generateUUID(),
          namespace: 'FlowingSpeechSynthesizer',
          name: 'StopSynthesis',
          task_id: state.value.synthesis.taskId,
          appkey: options.appkey
        }
      })
    );
    setTimeout(() => {
      // 发送启动指令
      const startMessages = {
        header: {
          appkey: options.appkey,
          namespace: 'FlowingSpeechSynthesizer',
          name: 'StartSynthesis',
          task_id: state.value.synthesis.taskId,
          message_id: generateUUID()
        },
        payload: {
          voice: style || state.value.voiceStyle,
          format: 'PCM',
          sample_rate: sampleRate,
          volume: 100,
          speech_rate: -200,
          pitch_rate: 0
        }
      };
      console.log(startMessages, 'startMessages', style);
      state.value.synthesis.socket?.send(JSON.stringify(startMessages));
    }, 100);
  };
  // 停止录音
  const stopRecording = () => {
    if (!state.value.isRecording) return;

    try {
      // 清理音频节点
      if (scriptProcessor) {
        scriptProcessor.disconnect();
        scriptProcessor = null;
      }

      if (audioInput) {
        audioInput.disconnect();
        audioInput = null;
      }

      // 停止媒体流
      if (audioStream) {
        audioStream.getTracks().forEach((track) => track.stop());
        audioStream = null;
      }

      // 发送停止指令
      if (
        state.value.recognition.socket?.readyState === WebSocket.OPEN &&
        state.value.recognition.taskId
      ) {
        state.value.recognition.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'SpeechTranscriber',
              name: 'StopTranscription',
              task_id: state.value.recognition.taskId
            }
          })
        );
      }

      state.value.isRecording = false;
      logMessage('录音已停止', 'recognition');
    } catch (error) {
      const errMsg = `停止录音失败: ${(error as Error).message}`;
      state.value.recognition.error = errMsg;
      logMessage(errMsg, 'recognition');
    }
  };

  // 发送文本到AI
  const sendToAI = async (text: string) => {
    state.value.isThinking = true;

    // 创建AI回复占位消息
    const aiMessageId = generateUUID();
    const aiMessage = reactive<Message>({
      messageId: aiMessageId,
      textMsg: '思考中...',
      type: 'text',
      time: new Date().toLocaleTimeString(),
      userType: UserType.Receive,
      end: false
    });
    state.value.msgList.push(aiMessage);
    await nextTick(() => scrollToBottom());
    await new Promise((resolve) => setTimeout(resolve, 10));

    try {
      abortController = new AbortController();
      const aiConfig: any = storages.get('aiConfig') || {};
      const key: AiConfigKeys = 'AI接警无思考-json';
      const agent = aiConfig[key];
      await runAiAgent(
        {
          message: { text },
          stream: false
        },
        'AI接警无思考-json',
        {
          stream: false,
          onText: (text, fullText) => {
            console.log(text, 'text', fullText);
          },
          onJSON: async (jsonData) => {
            if (jsonData?.response_content?.type == 'question') {
              const textMsg: any = jsonData.response_content.content;
              Object.assign(aiMessage, {
                ...aiMessage,
                textMsg: textMsg
              });
              // 实时合成语音
              if (textMsg) {
                startSynthesis(textMsg);
              }
              state.value.aiResObject = {
                ...state.value.aiResObject,
                ...jsonData
              };
              await nextTick(() => scrollToBottom());
              await new Promise((resolve) => setTimeout(resolve, 10));
            }
            console.log(jsonData, 'jsonData');
            if (jsonData.response_content?.type == 'summary') {
              const textMsg: any = jsonData.response_content.content;
              Object.assign(aiMessage, {
                ...aiMessage,
                textMsg: textMsg
              });

              // 实时合成语音
              if (textMsg) {
                startSynthesis(textMsg);
              }
              runAiAgent(
                {
                  message: { text: JSON.stringify(state.value.msgList) },
                  stream: false
                },
                '警情等级智能体',
                {
                  stream: false
                }
              ).then((val) => {
                state.value.aiResObject.caseLevel = val.json?.alert_level;
                console.log(val, 'aiResObject', state.value.aiResObject);
                state.value.aiResObject.end = true;
              });
            }

            await nextTick(() => scrollToBottom());
            await new Promise((resolve) => setTimeout(resolve, 10));
            aiMessage.end = true;
            state.value.isThinking = false;
          },
          onThink: (text) => {
            // aiMessage.textMsg = text;
          },

          onMessage: (message) => {
            console.log(message, 'message');
          },
          onEnd: (obj) => {
            console.log(obj, 'onEnd');
            nextTick(() => scrollToBottom());
            new Promise((resolve) => setTimeout(resolve, 10));
          }
        }
      );
      return;
    } catch (error) {
      if ((error as Error).name !== 'AbortError') {
        aiMessage.textMsg = `出错了: ${(error as Error).message}`;
        logMessage(`AI请求错误: ${(error as Error).message}`, 'recognition');
      }
      return '';
    } finally {
      state.value.isThinking = false;
    }
  };
  // 待接警聊天记录发送文本到ai
  const sendToAIArr = async (text: string, fun: any) => {
    state.value.isThinking = true;

    try {
      abortController = new AbortController();

      await runAiAgent(
        {
          message: { text: text },
          stream: false
        },
        'AI接警无思考-json',
        {
          stream: false,
          onText: (text, fullText) => {
            console.log(text, 'text', fullText);
          },
          onJSON: async (jsonData) => {
            if (jsonData?.response_content?.type == 'question') {
              const textMsg: any = jsonData.response_content.content;

              await nextTick(() => scrollToBottom());
              await new Promise((resolve) => setTimeout(resolve, 10));
            }
            console.log(jsonData, 'jsonData');
            if (jsonData.response_content?.type == 'summary') {
              const textMsg: any = jsonData.response_content.content;
              state.value.aiResObject = {
                ...state.value.aiResObject,
                ...jsonData
              };
              state.value.aiResObject.caller = '';
       
              await runAiAgent(
                {
                  message: { text: text },
                  stream: false
                },
                '警情等级智能体',
                {
                  stream: false
                }
              ).then((val) => {
                state.value.aiResObject.caseLevel = val.json?.alert_level;
                console.log(val, 'aiResObject', state.value.aiResObject);
              });
            }

            state.value.isThinking = false;
          },
          onThink: (text) => {
            // aiMessage.textMsg = text;
          },

          onMessage: (message) => {
            console.log(message, 'message');
          },
          onEnd: (obj) => {
            console.log(obj, 'onEnd');
            nextTick(() => scrollToBottom());
            new Promise((resolve) => setTimeout(resolve, 10));
          }
        }
      );

      return;
    } catch (error) {
      if ((error as Error).name !== 'AbortError') {
        logMessage(`AI请求错误: ${(error as Error).message}`, 'recognition');
      }
      return '';
    } finally {
      state.value.isThinking = false;
    }
  };

  // 开始语音合成
  const startSynthesis = async (text: string) => {
    if (
      !state.value.synthesis.socket ||
      state.value.synthesis.socket.readyState !== WebSocket.OPEN
    ) {
      throw new Error('合成连接未就绪');
    }

    // if (state.value.isSynthesizing) return;

    try {
      state.value.isSynthesizing = true;
      state.value.synthesis.status = 'synthesizing';
      updateComputedStates();

      const message: WebSocketMessage = {
        header: {
          appkey: options.appkey,
          namespace: 'FlowingSpeechSynthesizer',
          name: 'RunSynthesis',
          task_id: state.value.synthesis.taskId,
          message_id: generateUUID()
        },
        payload: { text }
      };

      // console.log('语言合成 params --->', message);
      state.value.synthesis.socket.send(JSON.stringify(message));
      logMessage(`开始合成文本: ${text}`, 'synthesis');
    } catch (error) {
      const errMsg = `合成失败: ${(error as Error).message}`;
      state.value.synthesis.error = errMsg;
      logMessage(errMsg, 'synthesis');
      state.value.isSynthesizing = false;
      state.value.synthesis.status = 'connected';
      updateComputedStates();
    }
  };

  // 播放音频
  const playAudio = async () => {
    if (state.value.audioData.length === 0) {
      logMessage('没有音频数据可播放', 'synthesis');
      return;
    }

    if (!audioContext) await initAudioContext();

    try {
      state.value.isPlaying = true;
      logMessage('开始播放音频', 'synthesis');

      // 合并音频数据
      const totalLength = state.value.audioData.reduce(
        (sum, buf) => sum + buf.byteLength,
        0
      );
      const merged = new Uint8Array(totalLength);
      let offset = 0;

      state.value.audioData.forEach((buf) => {
        merged.set(new Uint8Array(buf), offset);
        offset += buf.byteLength;
      });

      // 解码并播放
      const audioBuffer = await audioContext!.decodeAudioData(merged.buffer);
      const source = audioContext!.createBufferSource();
      source.buffer = audioBuffer;
      source.connect(audioContext!.destination);
      source.start();

      source.onended = () => {
        state.value.isPlaying = false;
        logMessage('音频播放结束', 'synthesis');
        clearAudio();
      };
    } catch (error) {
      const errMsg = `播放失败: ${(error as Error).message}`;
      state.value.synthesis.error = errMsg;
      logMessage(errMsg, 'synthesis');
      state.value.isPlaying = false;
    }
  };

  // 辅助方法
  const addMessage = (text: string, userType: UserType) => {
    state.value.msgList.push({
      messageId: generateUUID(),
      textMsg: text,
      type: 'text',
      time: new Date().toLocaleTimeString(),
      userType
    });
    scrollToBottom();
  };

  const scrollToBottom = () => {
    if (options.chatListRef.value) {
      options.chatListRef.value.scrollTop =
        options.chatListRef.value.scrollHeight;
    }
  };

  // 公开方法实现
  const startConversation = async () => {
    try {
      await connect(); // 等待识别和合成连接成功

      if (state.value.msgList.length == 0) {
        setTimeout(() => {
          // startDefaultScript();
        }, 1000);
      }
    } catch (error) {
      console.error('启动对话失败:', error);
    }
  };
  function setMsgList(v: any) {
    state.value.msgList = [...state.value.msgList, ...v];
    // sendToAIArr(JSON.stringify(state.value.msgList));
    nextTick(() => scrollToBottom());
  }
  // Default greeting script
  function startDefaultScript(end?: boolean) {
    const text = '您好,深圳119,请问着火了吗?';
    const text2 = [
      '好的消防马上过去了,我这边发一个短信给您,到时候您上传一下图片可以吗?',
      '好的消防已经过去了,谢谢您报警'
    ];
    if (end) {
      text2.forEach((item) => {
        addMessage(item, UserType.Receive);
        startSynthesis(item);
      });
    } else {
      // "幺幺九" is the TTS-friendly spoken reading of "119"
      const text3 = '您好,深圳幺幺九,请问着火了吗?';

      addMessage(text, UserType.Receive);

      startSynthesis(text3);
    }
    scrollToBottom();
  }
  /**
   * Send a StopSynthesis command, then close the synthesis connection
   */
  const sendStopSynthesis = () => {
    // Tear down the synthesis connection
    if (state.value.synthesis.socket) {
      if (
        state.value.synthesis.socket.readyState === WebSocket.OPEN &&
        state.value.synthesis.taskId
      ) {
        state.value.synthesis.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'FlowingSpeechSynthesizer',
              name: 'StopSynthesis',
              task_id: state.value.synthesis.taskId,
              appkey: options.appkey
            }
          })
        );
      }
      state.value.synthesis.socket.close();
      state.value.synthesis.socket = null;
      state.value.synthesis.status = 'disconnected';
    }
  };

  const stopConversation = () => {
    stopRecording();
    sendStopSynthesis();
    if (abortController) {
      abortController.abort();
      abortController = null;
    }
  };

  const toggleRecording = async () => {
    if (state.value.isRecording) {
      stopRecording();
    } else {
      await startConversation();
    }
  };

  const clearAudio = () => {
    state.value.audioData = [];
  };

  const connect = async () => {
    await Promise.all([
      connectWebSocket('recognition'),
      connectWebSocket('synthesis')
    ]);
  };

  const disconnect = () => {
    // Tear down the recognition connection
    if (state.value.recognition.socket) {
      if (
        state.value.recognition.socket.readyState === WebSocket.OPEN &&
        state.value.recognition.taskId
      ) {
        state.value.recognition.socket.send(
          JSON.stringify({
            header: {
              message_id: generateUUID(),
              namespace: 'SpeechTranscriber',
              name: 'StopTranscription',
              task_id: state.value.recognition.taskId
            }
          })
        );
      }
      state.value.recognition.socket.close();
      state.value.recognition.socket = null;
      state.value.recognition.status = 'disconnected';
    }

    // Tear down the synthesis connection (identical logic, so reuse the helper)
    sendStopSynthesis();

    stopRecording();
    updateComputedStates();
  };

  const reconnect = async () => {
    disconnect(); // synchronous teardown, nothing to await
    await connect();
  };

  const reset = () => {
    disconnect();
    state.value.msgList = [];
    state.value.transcript = '';
    state.value.finalTranscript = '';
    state.value.audioData = [];
    state.value.recognition.logs = [];
    state.value.synthesis.logs = [];
    if (audioContext) {
      audioContext
        .close()
        .catch((err) =>
          logMessage(`关闭音频上下文失败: ${err.message}`, 'recognition')
        );
      audioContext = null;
    }
  };
  function cleanup() {
    reset();
    storages.remove(StorageKey.aliyunToken);
  }
  onMounted(async () => {
    const res = await getAliyunToken();
    window.addEventListener('beforeunload', cleanup);
    options.token = res || storages.get(StorageKey.aliyunToken);
    // startConversation(); // initialize the connections eagerly if desired
  });
  // Clean up on component unmount
  onUnmounted(() => {
    window.removeEventListener('beforeunload', cleanup);
    reset();
    if (audioContext) {
      audioContext.close().catch(console.error);
    }
  });

  // Initialize derived/computed state
  updateComputedStates();

  return [
    state,
    {
      startConversation,
      stopConversation,
      toggleRecording,
      playAudio,
      clearAudio,
      connect,
      disconnect,
      reconnect,
      startRecording,
      stopRecording,
      reset,
      setAiStyle,
      startSynthesis,
      logMessage,
      setMsgList,
      sendToAIArr
    }
  ];
}

/**
 * Generate a 32-character random hex string (UUID-like, without dashes)
 * @returns {string} random string
 */
export function generateUUID() {
  let d = new Date().getTime();
  let d2 = (performance && performance.now && performance.now() * 1000) || 0;
  return 'xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'.replace(/[xy]/g, function (c) {
    let r = Math.random() * 16; // random number in [0, 16)
    if (d > 0) {
      r = (d + r) % 16 | 0;
      d = Math.floor(d / 16);
    } else {
      r = (d2 + r) % 16 | 0;
      d2 = Math.floor(d2 / 16);
    }
    return (c == 'x' ? r : (r & 0x3) | 0x8).toString(16);
  });
}
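The chunk-merging step that `playAudio` performs inline comes up again whenever streamed TTS audio has to be stitched together before decoding. It can be factored into a small pure helper, which is easy to unit-test without an `AudioContext`. A minimal sketch (the `mergeBuffers` name is ours, not part of any SDK):

```javascript
// Merge an array of ArrayBuffers into one contiguous ArrayBuffer.
// Pure function: no Web Audio dependency, straightforward to test.
function mergeBuffers(buffers) {
  const totalLength = buffers.reduce((sum, buf) => sum + buf.byteLength, 0);
  const merged = new Uint8Array(totalLength);
  let offset = 0;
  for (const buf of buffers) {
    merged.set(new Uint8Array(buf), offset);
    offset += buf.byteLength;
  }
  return merged.buffer;
}
```

With this helper, `playAudio` reduces to `decodeAudioData(mergeBuffers(state.value.audioData))` plus playback wiring.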
  • audio_player.js
javascript 复制代码
class PCMAudioPlayer {
  constructor(sampleRate) {
    this.sampleRate = sampleRate;
    this.audioContext = null;
    this.audioQueue = [];
    this.isPlaying = false;
    this.currentSource = null;
  }

  connect() {
    if (!this.audioContext) {
      this.audioContext = new (window.AudioContext ||
        window.webkitAudioContext)();
    }
  }

  pushPCM(arrayBuffer) {
    this.audioQueue.push(arrayBuffer);
    this._playNextAudio();
  }

  /**
   * Convert a raw PCM ArrayBuffer into an AudioBuffer
   */
  _bufferPCMData(pcmData) {
    const sampleRate = this.sampleRate; // sample rate of the incoming PCM data
    const length = pcmData.byteLength / 2; // 16-bit PCM: 2 bytes per sample
    const audioBuffer = this.audioContext.createBuffer(1, length, sampleRate);
    const channelData = audioBuffer.getChannelData(0);
    const int16Array = new Int16Array(pcmData); // view the PCM bytes as 16-bit signed samples

    for (let i = 0; i < length; i++) {
      // Map 16-bit PCM to a float in [-1.0, 1.0)
      channelData[i] = int16Array[i] / 32768;
    }
    const audioLength = (length / sampleRate) * 1000;
    console.log(`prepare audio: ${length} samples, ${audioLength} ms`);

    return audioBuffer;
  }

  async _playAudio(arrayBuffer) {
    if (this.audioContext.state === 'suspended') {
      await this.audioContext.resume();
    }

    const audioBuffer = this._bufferPCMData(arrayBuffer);

    this.currentSource = this.audioContext.createBufferSource();
    this.currentSource.buffer = audioBuffer;
    this.currentSource.connect(this.audioContext.destination);

    this.currentSource.onended = () => {
      console.log('Audio playback ended.');
      this.isPlaying = false;
      this.currentSource = null;
      this._playNextAudio(); // Play the next audio in the queue
    };
    this.currentSource.start();
    this.isPlaying = true;
  }

  _playNextAudio() {
    if (this.audioQueue.length > 0 && !this.isPlaying) {
      // Total byte length of everything queued so far
      const totalLength = this.audioQueue.reduce(
        (acc, buffer) => acc + buffer.byteLength,
        0
      );
      const combinedBuffer = new Uint8Array(totalLength);
      let offset = 0;

      // Concatenate every queued buffer into a single Uint8Array
      for (const buffer of this.audioQueue) {
        combinedBuffer.set(new Uint8Array(buffer), offset);
        offset += buffer.byteLength;
      }

      // The queue has been fully consumed; clear it
      this.audioQueue = [];
      // Hand the combined audio data to _playAudio
      this._playAudio(combinedBuffer.buffer);
    }
  }

  stop() {
    if (this.currentSource) {
      this.currentSource.stop(); // stop the currently playing source
      this.currentSource = null; // release the source reference
      this.isPlaying = false; // update playback state
    }
    this.audioQueue = []; // drop any queued audio
    console.log('Playback stopped and queue cleared.');
  }
}

export default PCMAudioPlayer;
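The sample conversion at the heart of `_bufferPCMData`, reinterpreting 16-bit signed PCM as floats in [-1.0, 1.0), can also be pulled out as a pure function, which makes the scaling easy to verify against edge cases (the helper name is illustrative):

```javascript
// Convert 16-bit signed PCM into Float32 samples in [-1.0, 1.0).
// Mirrors the per-sample loop inside PCMAudioPlayer._bufferPCMData.
function int16ToFloat32(pcmArrayBuffer) {
  const int16 = new Int16Array(pcmArrayBuffer);
  const float32 = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    float32[i] = int16[i] / 32768; // 32768 = 2^15, the Int16 magnitude range
  }
  return float32;
}
```

In the player, this array would be copied into `audioBuffer.getChannelData(0)`; extracting it as a pure function means the edge cases (0, -32768, 32767) can be tested without touching the Web Audio API at all.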