WebRTC: Building Your Own Audio/Video Live-Streaming Platform

While building an earlier college innovation project (大创), the backend used Tencent Cloud's audio/video service, so I never dug deeply into how live streaming actually works. That was my first encounter with it: I didn't implement anything myself, but I came away with a rough mental model, namely pushing and pulling streams, with Tencent Cloud's service acting as the relay server in between.

Later I applied for my own project, a live study-room platform, and since it was a streaming platform I wanted to implement the streaming myself. The backend developer and I first tried WebSocket: the frontend recorded the screen and sent it in segments, the backend relayed them, and the frontend reassembled and played them. The end result was poor. We also looked at RTMP, but my server is a 2-core/2GB machine that couldn't keep up, so we fell back to Tencent Cloud's live-streaming service. Still, it counted as a real attempt at implementing live streaming ourselves.

As of now I use WebRTC for audio/video streaming, with Node.js acting as the signaling server. Since the APIs are natively supported by browsers, I saw no noticeable lag while testing.

Prerequisites for audio/video live streaming

At its core, live streaming comes down to pushing streams and pulling streams.

Pushing a stream means the broadcaster uploads their live audio/video to a server; pulling a stream means a viewer fetches that audio/video from the server and plays it locally.

This part covers the prerequisites. As noted above, pushing and pulling are the heart of live streaming, so we first need to get to know the API that capture depends on: navigator.mediaDevices, which gives access to the computer's camera and microphone.

Caveats

The camera and microphone are privacy-sensitive devices, and browsers protect user privacy, so access to them is only granted under certain conditions:

  1. Pages served over the https protocol
  2. Pages served over the wss protocol
  3. localhost
  4. 127.0.0.1
  5. file:// URLs
  6. Pages opened by a Chrome extension

Anywhere else, calling the capture APIs below will fail, because navigator.mediaDevices.getUserMedia will be undefined.
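
As a quick sanity check, you can probe both requirements in the console before calling any capture API. This is a minimal sketch, not from the original article; window.isSecureContext is a standard browser property that is true for https, localhost, and other trustworthy origins:

javascript
// Minimal guard: confirm the page is a secure context and the API exists.
if (!window.isSecureContext) {
  console.warn("Not a secure context: navigator.mediaDevices will be unavailable");
}
if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
  console.warn("getUserMedia is not available in this environment");
}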

getUserMedia()

  • navigator.mediaDevices.getUserMedia captures the audio (microphone) and video (camera) devices on the machine
javascript
function handleError(error) {
  alert("摄像头无法正常使用,请检查是否占用或缺失")
  console.error('navigator.MediaDevices.getUserMedia error: ', error.message, error.name);
}

function initInnerLocalDevice(){
  const localDevice = {
    audioIn: [],
    videoIn: [],
    audioOut: []
  }
  const constraints = { video: true, audio: true }
  if (!navigator.mediaDevices || !navigator.mediaDevices.enumerateDevices) {
    console.log("This browser cannot enumerate media devices");
    return;
  }
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function(stream) {
      // Stop the probe tracks right away; we only asked for permission
      // so that enumerateDevices() returns real device labels.
      stream.getTracks().forEach(track => {
        track.stop()
      })
      // Enumerate every audio/video device
      navigator.mediaDevices.enumerateDevices()
        .then(function(devices) {
          devices.forEach(function(device) {
            const obj = { id: device.deviceId, kind: device.kind, label: device.label }
            if (device.kind === 'audioinput') {
              if (localDevice.audioIn.filter(e => e.id === device.deviceId).length === 0) {
                localDevice.audioIn.push(obj)
              }
            } else if (device.kind === 'audiooutput') {
              if (localDevice.audioOut.filter(e => e.id === device.deviceId).length === 0) {
                localDevice.audioOut.push(obj)
              }
            } else if (device.kind === 'videoinput') {
              if (localDevice.videoIn.filter(e => e.id === device.deviceId).length === 0) {
                localDevice.videoIn.push(obj)
              }
            }
          });
        })
        .catch(handleError);
    })
    .catch(handleError);
}

Specifying a resolution

Above we passed { video: true, audio: true } as constraints; besides that, we can also request a specific resolution when acquiring the stream.

javascript
// Option 1: min/ideal/max ranges
{
  audio: true,
  video: {
    width: { min: 320, ideal: 1280, max: 1920 },
    height: { min: 240, ideal: 720, max: 1080 }
  }
}
// Option 2: exact target values
{
  audio: true,
  video: { width: 720, height: 480 }
}

Choosing the front or rear camera (mobile only)

javascript
// Front camera
{ audio: true, video: { facingMode: "user" } }
// Rear camera
{ audio: true, video: { facingMode: { exact: "environment" } } }

Specifying the FPS

FPS: how many frames per second the video displays.

javascript
const constraints = {
  audio: true,
  video: {
    width:1920,
    height:1080,
    frameRate: { ideal: 10, max: 15 }
  }
};

getDisplayMedia()

Used to share the desktop.

javascript
async function getShareMedia(){
  const constraints = {
    video:{width:1920,height:1080},
    audio:false
  };
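  // Stop any previous capture before starting a new one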
  if (window.stream) {
    window.stream.getTracks().forEach(track => {
      track.stop();
    });
  }
  return await navigator.mediaDevices.getDisplayMedia(constraints).catch(handleError);
}

The constraints object takes the same options as in getUserMedia().

Common use cases

Sharing the screen

javascript
const startDesk = async () => {
  try {
    const stream = await navigator.mediaDevices.getDisplayMedia({
      audio: true,
      video: true
    })
    const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
    // Add the microphone's audio track to the screen stream
    stream.addTrack(micStream.getAudioTracks()[0]);
    videoDesk.srcObject = stream
  } catch (error) {
    console.log(error);
  }
}

Sharing the camera

javascript
const startSchema = () => {
  navigator.mediaDevices.getUserMedia({
    // Besides these, you can also specify the resolution, front/rear camera, and FPS
    audio: true,
    video: true
  })
    .then((stream) => {
      video.srcObject = stream;
    })
    .catch(err => {
      console.log(err);
    })
}

Merging the camera stream with the screen stream

The camera stream and the screen stream can be combined with a stream-merging library (VideoStreamMerger below).

javascript
async function assignSchema() {
  let localstream = await navigator.mediaDevices.getUserMedia({
    video: {
      width: 1920,
      height: 1080,
      frameRate: { ideal: 15, max: 24 },
    },
  })
  let shareStream = await navigator.mediaDevices.getDisplayMedia({
    video: { width: 1920, height: 1080 },
    audio: false,
  })
  let mergerVideo = new VideoStreamMerger({ fps: 24, clearRect: true })
  mergerVideo.addStream(shareStream, {
    x: 0,
    y: 0,
    width: mergerVideo.width,
    height: mergerVideo.height,
    mute: true,
  })
  mergerVideo.addStream(localstream, {
    x: 0,
    y: 0,
    width: 200,
    height: 150,
    mute: false,
  });
  mergerVideo.start()
  videoSchema.srcObject = mergerVideo.result
}

Complete example

html
<!DOCTYPE html>
<html lang="en">

  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
  </head>

  <body>
    <div style="display: flex;margin-bottom: 20px;">
      <!-- Camera preview -->
      Camera
      <video src="" id="video" controls width="500px" height="500px" style="margin-right: 20px;"></video>
      <!-- Shared-desktop preview -->
      Desktop
      <video src="" id="videoDesk" controls width="500px" height="500px"></video>
    </div>
    <button onclick="startSchema()">开启摄像头</button>
    <button onclick="startDesk()">共享桌面</button>
    <button onclick="assignSchema()">合并视频流</button>
    <video src="" id="videoSchema" controls width="500px" height="500px"></video>
    <!-- The third-party stream-merging library -->
    <script src="./video-stream-merger.js"></script>
    <script>
      const video = document.getElementById('video')
      const videoDesk = document.getElementById('videoDesk')
      const videoSchema = document.getElementById('videoSchema')
      const startSchema = () => {
        navigator.mediaDevices.getUserMedia({
          // Besides these, you can also specify the resolution, front/rear camera, and FPS
          audio: true,
          video: true
        })
          .then((stream) => {
            video.srcObject = stream;
          })
          .catch(err => {
            console.log(err);
          })
      }
      const startDesk = async () => {
        try {
          const stream = await navigator.mediaDevices.getDisplayMedia({
            audio: true,
            video: true
          })
          const micStream = await navigator.mediaDevices.getUserMedia({ audio: true });
          // Add the microphone's audio track to the screen stream
          stream.addTrack(micStream.getAudioTracks()[0]);
          videoDesk.srcObject = stream
        } catch (error) {
          console.log(error);
        }
      }

      async function assignSchema() {
        let audioId = null
        let videoId = null
        let localstream = await navigator.mediaDevices.getUserMedia({
          audio: { deviceId: audioId ? { exact: audioId } : undefined },
          video: {
            deviceId: videoId ? { exact: videoId } : undefined,
            width: 1920,
            height: 1080,
            frameRate: { ideal: 15, max: 24 },
          },
        })
        let shareStream = await navigator.mediaDevices.getDisplayMedia({
          video: { width: 1920, height: 1080 },
          audio: false,
        })
        let mergerVideo = new VideoStreamMerger({ fps: 24, clearRect: true })
        mergerVideo.addStream(shareStream, {
          x: 0,
          y: 0,
          width: mergerVideo.width,
          height: mergerVideo.height,
          mute: true,
        })
        mergerVideo.addStream(localstream, {
          x: 0,
          y: 0,
          width: 200,
          height: 150,
          mute: false,
        });
        mergerVideo.start()
        videoSchema.srcObject = mergerVideo.result
      }
    </script>
  </body>

</html>

Common live-streaming technologies

Commonly used technologies include:

  1. RTMP (Real-Time Messaging Protocol): a real-time streaming protocol developed by Adobe, supporting real-time transport and playback of audio and video.
  2. WebRTC (Web Real-Time Communication): browser-based real-time communication, supporting interactive, real-time audio/video transport.
  3. HLS (HTTP Live Streaming): a streaming protocol developed by Apple, supporting live delivery and playback of audio and video.
  4. DASH (Dynamic Adaptive Streaming over HTTP): adaptive-bitrate streaming over HTTP, supporting audio/video delivery and playback.
  5. SIP (Session Initiation Protocol): a protocol for establishing, modifying, and terminating sessions, used for audio/video calls and conferencing.
  6. H.323: a standard protocol suite for audio/video communication, supporting real-time transport and conference control.

WebRTC's advantages:

  1. Supported by the mainstream browsers, easy for developers to pick up, widely used, open source, and mature
  2. Millisecond-level latency (an RTMP/FLV stream passes through several layers of CDN before playback, so its latency is higher; WebRTC also has a mature browser-side API, so it takes less code)

PeerConnection

Above we covered the capture side; next let's look at how WebRTC links two browsers together, i.e. how video is pushed and pulled between them.

PeerConnection is the most important part of a WebRTC call. Note that although WebRTC is now a web standard (W3C), not every browser implements it in full; since most Chinese browsers are built on Chromium, though, WebRTC is compatible with the large majority of them. Browsers with WebRTC support include Chrome, 360, Edge, Firefox, and Safari.

To stay compatible across browsers we can grab the constructor like this:

javascript
var PeerConnection = window.RTCPeerConnection ||
  window.mozRTCPeerConnection ||
  window.webkitRTCPeerConnection;

Core methods:

  • addIceCandidate(): stores an ICE candidate received from the peer; candidate exchange continues throughout connection setup until there are no more candidates
  • addTrack(): adds an audio or video track
  • createAnswer(): creates the answer signal
  • createDataChannel(): creates a message channel; once the WebRTC connection is up, the peers can exchange text directly, P2P, with no relay server (see the sketch after the listener list below)
  • createOffer(): creates the initial offer signal
  • setRemoteDescription(): stores the description sent by the remote end
  • setLocalDescription(): stores the description created locally

Besides these methods there are listener callbacks for information arriving from the remote side:

  • ondatachannel: fires on the callee once the caller creates a DataChannel; also where P2P messages are received
  • ontrack: fires when a remote media track (the remote audio/video) arrives
  • onicecandidate: fires for each ICE candidate gathered locally
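
As a rough illustration of the DataChannel pieces above (a sketch under assumptions: pc and remotePc stand for the two peers' already-negotiated RTCPeerConnection instances, which are not shown here):

javascript
// Caller side: create a channel and send a message once it opens.
const channel = pc.createDataChannel("chat");
channel.onopen = () => channel.send("hello over P2P");
channel.onmessage = (e) => console.log("caller received:", e.data);

// Callee side: the channel arrives through the ondatachannel callback.
remotePc.ondatachannel = (event) => {
  event.channel.onmessage = (e) => console.log("callee received:", e.data);
};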

The WebRTC session flow

Take A calling B as an example (a minimal code sketch follows the list):

  1. A calls B, usually over WebSocket (or another real-time channel), so that B receives the invitation
  2. B accepts; A and B each initialize a PeerConnection instance to hold the session state
  3. A calls createOffer to create the offer and stores a copy in its local PeerConnection via setLocalDescription
  4. A forwards the offer to B through the signaling server
  5. On receiving A's offer, B stores it in its initialized PeerConnection via setRemoteDescription
  6. B calls createAnswer to create the answer and stores it in its local PeerConnection via setLocalDescription
  7. B sends its answer back to A through the server
  8. A stores B's answer in its local PeerConnection via setRemoteDescription
  9. The call is established
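
The same flow expressed as code, as a minimal sketch: signaling is a placeholder for whatever WebSocket wrapper you use (it is an assumption, not part of the article), and the ICE-candidate exchange via onicecandidate/addIceCandidate runs alongside these steps exactly as in the meeting code below:

javascript
// Caller A (sketch; `signaling` and `localStream` are assumed to exist)
async function callB(localStream) {
  const pcA = new RTCPeerConnection();
  localStream.getTracks().forEach((track) => pcA.addTrack(track));
  const offer = await pcA.createOffer();
  await pcA.setLocalDescription(offer);       // step 3: keep a local copy
  signaling.send({ type: "offer", offer });   // step 4: relay to B
  signaling.on("answer", async ({ answer }) => {
    await pcA.setRemoteDescription(answer);   // step 8: store B's answer
  });
}

// Callee B
signaling.on("offer", async ({ offer }) => {
  const pcB = new RTCPeerConnection();
  await pcB.setRemoteDescription(offer);      // step 5: store A's offer
  const answer = await pcB.createAnswer();    // step 6: create the answer
  await pcB.setLocalDescription(answer);
  signaling.send({ type: "answer", answer }); // step 7: relay back to A
});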

Audio/video meeting in practice

Meeting backend: the signaling server

You can install a scaffold I wrote, globally or locally, to explore it:

bash
yarn global add @xy0711/xytem
bash
xytem demo

This generates the corresponding meeting backend and frontend templates.
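
If you just want to see the shape of the signaling side without the scaffold, here is a minimal socket.io relay sketch. It is an assumption reconstructed from the client code below (the event names roomUserList/offer/answer/candidate/msg and the targetUid/userId fields are inferred from how the client uses them, and port 18080 matches the client's $serverSocketUrl); the template the scaffold generates may differ:

javascript
// Minimal socket.io signaling relay (illustrative sketch, not the scaffold's code).
const { Server } = require("socket.io");
const io = new Server(18080);
const users = new Map(); // userId -> socket.id (simple in-memory bookkeeping)

io.on("connection", (socket) => {
  const { roomId, userId, nickname } = socket.handshake.query;
  users.set(userId, socket.id);
  socket.join(roomId);
  socket.to(roomId).emit("msg", { type: "join", data: { userId, nickname } });

  // Reply with everyone currently in the requested room.
  socket.on("roomUserList", async ({ roomId }) => {
    const sockets = await io.in(roomId).fetchSockets();
    socket.emit(
      "roomUserList",
      sockets.map((s) => ({
        userId: s.handshake.query.userId,
        nickname: s.handshake.query.nickname,
        roomId,
      }))
    );
  });

  // Relay offers, answers, and candidates to the targeted peer only.
  ["offer", "answer", "candidate"].forEach((type) => {
    socket.on(type, (params) => {
      const target = users.get(params.targetUid);
      if (target) io.to(target).emit("msg", { type, data: params });
    });
  });

  socket.on("disconnect", () => {
    users.delete(userId);
    socket.to(roomId).emit("msg", { type: "leave", data: { userId, nickname } });
  });
});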

The many-to-many client code

Install the third-party library

bash
yarn add socket.io-client
typescript
import { io } from "socket.io-client";
// Third-party library for merging video streams
import { VideoStreamMerger } from "./mergeStream";

const $serverSocketUrl = "ws://127.0.0.1:18080";

function handleError(error: Error) {
  // alert("摄像头无法正常使用,请检查是否占用或缺失")
  console.error(
    "navigator.MediaDevices.getUserMedia error: ",
    error.message,
    error.name,
  );
}

var PeerConnection = window.RTCPeerConnection;

// Stores every established connection (one-to-many)
var RtcPcMaps = new Map();

class CallupMany {
  formInline: any = {};
  localStream: any;
  linkSocket: any;
  centerDialogVisible: boolean = false;
  roomUserList: any[] = [];
  rtcPcParams = {
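    // Empty here, so this only works on a LAN; add STUN/TURN servers for the public internet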
    iceServers: [],
  };
  mediaStatus = {
    audio: false,
    video: false,
  };
  setUserList: any;
  mergerVideo: VideoStreamMerger | null = null;

  // Initialize on entering the meeting
  init(nickname: any, roomId: any, userId: any, setUserList: any) {
    this.formInline.nickname = nickname;
    this.formInline.roomId = roomId;
    this.formInline.userId = userId;
    this.setUserList = setUserList;
    this.clientWS();
  }
  // Connect to the WebSocket server and register its listeners
  clientWS() {
    const that = this;
    this.linkSocket = io($serverSocketUrl, {
      reconnectionDelayMax: 10000,
      transports: ["websocket"],
      query: that.formInline,
    });
    this.linkSocket.on("connect", async (_e: any) => {
      that.centerDialogVisible = false; // after joining
      // Fetch the room's user list (a new arrival must set up an RTC connection with every user already in the room; the later joiner pushes the offer)
      setTimeout(() => {
        that.linkSocket.emit("roomUserList", {
          roomId: that.formInline.roomId,
        });
      }, 500);
    });
    // User list (everyone who has joined the meeting)
    this.linkSocket.on("roomUserList", (e: any) => {
      that.roomUserList = e;
      that.setUserList(e);
      // Once we have the room's user list, start establishing RTC connections
      that.initMeetingRoomPc();
    });
    // Incoming signaling message
    this.linkSocket.on("msg", async (e: any) => {
      if (e["type"] === "join" || e["type"] === "leave") {
        const userId = e["data"]["userId"];
        const nickname = e["data"]["nickname"];
        // Someone joined the room
        if (e["type"] === "join") {
          that.roomUserList.push({
            userId: userId,
            nickname: nickname,
            roomId: that.formInline.roomId,
          });
        } else {
          // Someone left the room
          RtcPcMaps.delete(that.formInline.userId + "-" + userId);
          that.removeChildVideoDom(userId);
        }
      }
      // Received an offer
      if (e["type"] === "offer") {
        await that.onRemoteOffer(e["data"]["userId"], e["data"]["offer"]);
      }
      // Received an answer
      if (e["type"] === "answer") {
        await that.onRemoteAnswer(e["data"]["userId"], e["data"]["answer"]);
      }
      // Received an ICE candidate
      if (e["type"] === "candidate") {
        that.onCandiDate(e["data"]["userId"], e["data"]["candidate"]);
      }
    });
    // WebSocket connection error
    this.linkSocket.on("error", (e: any) => {
      console.log("error", e);
    });
  }

  // Share the desktop
  async shareDesk() {
    this.mergerVideo?.destroy();
    this.mergerVideo = null;
    const newStream = await this.getLocalDeskMedia();
    this.localStream = newStream;
    this.replaceRemoteStream(newStream);
  }
  // Switch to the merged stream (camera overlaid on desktop)
  async shareAssignSchema() {
    this.mergerVideo?.destroy();
    this.mergerVideo = null;
    const newStream = await this.getAssignMedia();
    this.localStream = newStream;
    this.replaceRemoteStream(newStream);
  }
  // Show the default stream (camera only)
  async shareDefault() {
    this.mergerVideo?.destroy();
    this.mergerVideo = null;
    const newStream = await this.getLocalUserMedia();
    this.localStream = newStream;
    this.replaceRemoteStream(newStream);
  }

  // Get the local camera/microphone stream
  async getLocalUserMedia() {
    const audioId = this.formInline.audioInId;
    const videoId = this.formInline.videoId;
    const constraints = {
      audio: { deviceId: audioId ? { exact: audioId } : undefined },
      video: {
        deviceId: videoId ? { exact: videoId } : undefined,
        width: 640,
        height: 480,
        frameRate: { ideal: 20, max: 24 },
      },
    };
    if ((window as any).stream) {
      (window as any).stream.getTracks().forEach((track: any) => {
        track.stop();
      });
    }
    return await navigator.mediaDevices
      .getUserMedia(constraints)
      .catch(handleError);
  }
  // Get the shared-desktop stream
  async getLocalDeskMedia() {
    const stream = await navigator.mediaDevices.getDisplayMedia({
      audio: true,
      video: true,
    });
    const micStream = await navigator.mediaDevices.getUserMedia({
      audio: true,
    });
    // Add the microphone's audio track to the screen stream
    stream.addTrack(micStream.getAudioTracks()[0]);
    return stream;
  }
  // Get the merged stream (desktop + camera)
  async getAssignMedia() {
    const audioId = this.formInline.audioInId;
    const videoId = this.formInline.videoId;
    let localstream = await navigator.mediaDevices.getUserMedia({
      audio: { deviceId: audioId ? { exact: audioId } : undefined },
      video: {
        deviceId: videoId ? { exact: videoId } : undefined,
        width: 1920,
        height: 1080,
        frameRate: { ideal: 15, max: 24 },
      },
    });
    let shareStream = await navigator.mediaDevices.getDisplayMedia({
      video: { width: 1920, height: 1080 },
      audio: false,
    });
    this.mergerVideo = new VideoStreamMerger({ fps: 24, clearRect: true });
    this.mergerVideo.addStream(shareStream, {
      x: 0,
      y: 0,
      width: this.mergerVideo.width,
      height: this.mergerVideo.height,
      mute: true,
    });
    this.mergerVideo.addStream(localstream, {
      x: 0,
      y: 0,
      width: 200,
      height: 150,
      mute: false,
    });
    this.mergerVideo.start();
    return this.mergerVideo.result;
  }

  // Replace the stream being sent to all remote peers
  async replaceRemoteStream(newStream: any) {
    // Stop the original tracks first
    if ((window as any).stream) {
      (window as any).stream.getTracks().forEach((track: any) => {
        track.stop();
      });
    }
    await this.setDomVideoStream("localdemo01", newStream);
    const [videoTrack] = newStream.getVideoTracks();
    // Update the video sender on every RTC connection
    RtcPcMaps.forEach((e) => {
      const senders = e.getSenders();
      const send = senders.find((s: any) => s.track.kind === "video");
      send.replaceTrack(videoTrack);
    });
  }
  // Connect to every other user in the room
  async initMeetingRoomPc() {
    const that = this;
    if (!this.localStream) {
      this.localStream = await this.getLocalUserMedia();
      // Start muted with the camera off
      this.initMediaStatus();
    }
    this.setDomVideoStream("localdemo01", this.localStream);
    const localUid = this.formInline.userId;
    let others = this.roomUserList
      .filter((e: any) => e.userId !== localUid)
      .map((e: any, index: any) => {
        return e.userId;
      });
    // For each other user, create a PeerConnection and push an offer
    others.forEach(async (uid: any) => {
      let pcKey = localUid + "-" + uid;
      let pc = RtcPcMaps.get(pcKey);
      if (!pc) {
        pc = new PeerConnection(that.rtcPcParams);
        RtcPcMaps.set(pcKey, pc);
      }
      for (const track of that.localStream.getTracks()) {
        pc.addTrack(track);
      }
      // Create the offer
      let offer = await pc.createOffer({ iceRestart: true });
      // Set the offer as the local description
      await pc.setLocalDescription(offer);
      // Send the offer to the callee
      let params = { targetUid: uid, userId: localUid, offer: offer };
      that.linkSocket.emit("offer", params);
      that.onPcEvent(pc, localUid, uid);
    });
  }
  // Attach a stream to a local video element
  async setDomVideoStream(domId: any, newStream: any) {
    let video = document.getElementById(domId) as any;
    let stream = video.srcObject;
    if (stream) {
      stream.getAudioTracks().forEach((e: any) => {
        stream.removeTrack(e);
      });
      stream.getVideoTracks().forEach((e: any) => {
        stream.removeTrack(e);
      });
    }
    video.srcObject = newStream;
    video.muted = true;
  }
  // When someone leaves the meeting, remove their video element
  removeChildVideoDom(domId: any) {
    let video = document.getElementById(domId) as any;
    if (video) {
      video.parentNode.removeChild(video);
    }
  }
  // When someone joins the meeting, create a video element for them
  createRemoteDomVideoStream(domId: any, track: any) {
    let parentDom = document.getElementById("allVideo") as any;
    let id = domId + "-media";
    let video = document.getElementById(id) as any;
    if (!video) {
      video = document.createElement("video");
      video.id = id;
      video.controls = true;
      video.autoplay = true;
      video.muted = true;
      video.style.width = "100%";
      video.style.height = "100%";
    }
    let stream = video.srcObject;
    if (stream) {
      stream.addTrack(track);
    } else {
      let newStream = new MediaStream();
      newStream.addTrack(track);
      video.srcObject = newStream;
      video.muted = false;
      parentDom.appendChild(video);
    }
  }
  // Register PeerConnection event listeners
  onPcEvent(pc: any, localUid: any, remoteUid: any) {
    const that = this;
    pc.ontrack = function (event: any) {
      that.createRemoteDomVideoStream(remoteUid, event.track);
    };
    pc.onicecandidate = (event: any) => {
      if (event.candidate) {
        that.linkSocket.emit("candidate", {
          targetUid: remoteUid,
          userId: localUid,
          candidate: event.candidate,
        });
      } else {
        /* No more candidates in this negotiation */
        console.log("No more candidates in this negotiation");
      }
    };
  }
  // Handle a remote ICE candidate
  onCandiDate(fromUid: any, candidate: any) {
    const localUid = this.formInline.userId;
    let pcKey = localUid + "-" + fromUid;
    let pc = RtcPcMaps.get(pcKey);
    pc.addIceCandidate(candidate);
  }
  // Handle a remote offer and reply with an answer
  async onRemoteOffer(fromUid: any, offer: any) {
    const localUid = this.formInline.userId;
    let pcKey = localUid + "-" + fromUid;
    let pc = RtcPcMaps.get(pcKey);
    if (!pc) {
      pc = new PeerConnection(this.rtcPcParams);
      RtcPcMaps.set(pcKey, pc);
    }
    this.onPcEvent(pc, localUid, fromUid);
    for (const track of this.localStream.getTracks()) {
      pc.addTrack(track);
    }
    await pc.setRemoteDescription(offer);
    let answer = await pc.createAnswer();
    await pc.setLocalDescription(answer);
    let params = { targetUid: fromUid, userId: localUid, answer: answer };
    this.linkSocket.emit("answer", params);
  }
  // Handle a remote answer
  async onRemoteAnswer(fromUid: any, answer: any) {
    const localUid = this.formInline.userId;
    let pcKey = localUid + "-" + fromUid;
    let pc = RtcPcMaps.get(pcKey);
    await pc.setRemoteDescription(answer);
  }
  // Turn the microphone on or off
  audioControl(b: any) {
    RtcPcMaps.forEach((v, k) => {
      const senders = v.getSenders();
      const send = senders.find((s: any) => s.track.kind === "audio");
      send.track.enabled = b;
      this.mediaStatus.audio = send.track.enabled;
    });
    this.localStream.getAudioTracks()[0].enabled = b;
    this.mediaStatus.audio = b;
  }
  // Turn the camera on or off
  videoControl(b: any) {
    RtcPcMaps.forEach((v, k) => {
      const senders = v.getSenders();
      const send = senders.find((s: any) => s.track.kind === "video");
      send.track.enabled = b;
      this.mediaStatus.video = send.track.enabled;
    });
    this.localStream.getVideoTracks()[0].enabled = b;
    this.mediaStatus.video = b;
  }
  // Mute and disable the camera by default
  initMediaStatus() {
    // this.localStream.getVideoTracks()[0].enabled = false;
    // this.localStream.getAudioTracks()[0].enabled = false;
    // console.log("Your mic and camera are off by default; enable them manually");
  }
  // Get the audio enabled state for the given DOM element
  getAudioStatus(domId: any) {
    console.log("domId", domId);
    let video = document.getElementById(domId) as any;
    let stream = video.srcObject;
    return stream.getAudioTracks()[0].enabled;
  }
}
const callupMeeting = new CallupMany();
export default callupMeeting;
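
Wiring the class into a page is then a one-liner; a hedged usage sketch (the nickname, room, and user values are hypothetical, the module path assumes the class above is saved as CallupMany.ts, and the page must contain the localdemo01 video element and allVideo container the class expects):

javascript
import callupMeeting from "./CallupMany";

// Hypothetical entry point: join room "room-1" as user "u-1001".
callupMeeting.init("Alice", "room-1", "u-1001", (userList) => {
  console.log("room members:", userList);
});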
前端