Android AIDL: Passing Audio and Video Data Across Processes

Introduction

Common data types that Android AIDL can pass include:

  • Java basic types (int, String, long, etc.)

  • Maps and arrays

  • Parcelable classes

  • Others

This article focuses on passing a Surface handle. Looking at the source, Surface itself implements Parcelable, so it can cross process boundaries; it is a good fit when large data such as video and audio must be passed frequently and efficiently.

A typical scenario: face recognition is followed by speech recognition. Under heavy load the face recognition runs in separate processes, so the speech handling involves frequent interaction between the main process and child processes. To guarantee that a crash in a child process does not affect the main process, the data exchange with the child processes is designed as described below.

Passing Video

Face data is captured in real time: the camera provides the video stream, and a SurfaceView in the layout shows the preview.

xml
  <SurfaceView
    android:id="@+id/tvPreview"
    android:layout_width="match_parent"
    android:layout_height="match_parent"/>

In the main process, obtain the Surface handle from the SurfaceView's holder and pass it to the child process via AIDL.

aidl
import android.view.Surface;
// Declare any non-default types here with import statements

interface IFaceService {
    void startFaceCamera(in Surface surface);
    void stopFaceCamera();
}
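
On the main-process side, a minimal sketch of how the Surface could be obtained and handed over, assuming the service has already been bound and faceService is the IFaceService proxy returned from onServiceConnected (the controller class and names here are illustrative, not part of the original code):

kotlin
import android.view.SurfaceHolder
import android.view.SurfaceView

// Main process (sketch): hand the SurfaceView's Surface to the child process once it exists.
// `faceService` is an assumed IFaceService proxy obtained from bindService()/onServiceConnected().
class FacePreviewController(private val faceService: IFaceService) : SurfaceHolder.Callback {

    fun attach(surfaceView: SurfaceView) {
        surfaceView.holder.addCallback(this)
    }

    override fun surfaceCreated(holder: SurfaceHolder) {
        // Surface implements Parcelable, so it can cross the process boundary through AIDL.
        faceService.startFaceCamera(holder.surface)
    }

    override fun surfaceChanged(holder: SurfaceHolder, format: Int, width: Int, height: Int) = Unit

    override fun surfaceDestroyed(holder: SurfaceHolder) {
        faceService.stopFaceCamera()
    }
}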

The child process receives the Surface handle and processes the camera data in its Service (a sketch of the Service itself follows after the camera manager code below).

kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import android.graphics.ImageFormat
import android.hardware.camera2.CameraAccessException
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.hardware.camera2.CaptureRequest
import android.media.ImageReader
import android.os.Handler
import android.os.HandlerThread
import android.util.Range
import android.view.Surface
import androidx.core.app.ActivityCompat
import com.unisound.state.machine.util.LogUtils


internal object UniCameraManager {

    private const val TAG = "UManager"

    private var mCameraThread: HandlerThread? = null
    private var mCameraHandler: Handler? = null

    //Camera2
    private var mCameraDevice: CameraDevice? = null

    private var mCameraId: String? = null

    // Default to the front-facing camera (note this constant is a LENS_FACING value, not a camera id)
    private const val DEFAULT_CAMERA_ID = CameraCharacteristics.LENS_FACING_FRONT
    private const val DEFAULT_SIZE_WIDTH = 640
    private const val DEFAULT_SIZE_HEIGHT = 480

    private var mPreviewBuilder: CaptureRequest.Builder? = null
    private var mCaptureRequest: CaptureRequest? = null
    private var mPreviewSession: CameraCaptureSession? = null
    private var characteristics: CameraCharacteristics? = null
    private var fpsRanges: Array<Range<Int>> = arrayOf()

    private var mSurface: Surface? = null
    private var mImageReader: ImageReader? = null
    private var mPreviewData: ICameraPreviewListener? = null

    fun setCameraPreviewListener(listener: ICameraPreviewListener){
        mPreviewData = listener
    }

    fun startCamera(context: Context, surface: Surface?) {
        mSurface = surface

        mImageReader = ImageReader.newInstance(
            DEFAULT_SIZE_WIDTH, DEFAULT_SIZE_HEIGHT, ImageFormat.YUV_420_888, 1
        )

        mCameraThread = HandlerThread("CameraServerThread")
        mCameraThread?.start()
        mCameraHandler = mCameraThread?.looper?.let { Handler(it) }
        setupCamera(context)
        openCamera(context, mCameraId.toString())

        mImageReader?.setOnImageAvailableListener(ImageReaderListener, mCameraHandler)
    }

    private fun setupCamera(context: Context) {
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        try {
            // Camera id 0 is typically the back camera and 1 the front camera
            mCameraId = getCameraId(manager)

            characteristics = manager.getCameraCharacteristics(mCameraId.toString())
            fpsRanges = characteristics?.get(CameraCharacteristics.CONTROL_AE_AVAILABLE_TARGET_FPS_RANGES)
                ?: arrayOf()
            LogUtils.d(TAG, "fpsRanges:${fpsRanges.contentToString()}")
        } catch (e: java.lang.Exception) {
            e.printStackTrace()
        }
    }

    /**
     * Get the camera id for the desired lens facing
     */
    private fun getCameraId(manager: CameraManager): String {
        var id = ""
        manager.cameraIdList.forEach { cameraId ->
            val characteristics = manager.getCameraCharacteristics(cameraId)
            val facing = characteristics.get(CameraCharacteristics.LENS_FACING)

            if (facing != null && facing == DEFAULT_CAMERA_ID) {
                id = cameraId
                return@forEach
            }
            // If no camera with the desired facing is found, fall back to the first camera id
            if (id.isEmpty()) {
                id = cameraId
            }
        }
        return id
    }

    /**
     * Open the camera
     */
    private fun openCamera(context: Context, cameraId: String) {
        // Get the CameraManager system service
        val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
        // Check the camera permission before opening
        try {
            if (ActivityCompat.checkSelfPermission(
                    context, Manifest.permission.CAMERA
                ) != PackageManager.PERMISSION_GRANTED
            ) {
                LogUtils.e(TAG, "camera permission not allow")
                return
            }
            manager.openCamera(cameraId, mStateCallback, mCameraHandler)
            LogUtils.d(TAG, "openCamera")
        } catch (e: CameraAccessException) {
            e.printStackTrace()
        }
    }


    private val mStateCallback: CameraDevice.StateCallback = object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            LogUtils.d(TAG, "StateCallback:onOpened")
            mCameraDevice = camera
            startPreview()
        }

        override fun onDisconnected(cameraDevice: CameraDevice) {
            LogUtils.d(TAG, "StateCallback:onDisconnected")
            cameraDevice.close()
            mCameraDevice = null
        }

        override fun onError(cameraDevice: CameraDevice, error: Int) {
            LogUtils.d(TAG, "StateCallback:onError:$error")
            cameraDevice.close()
            mCameraDevice = null
        }
    }

    private fun startPreview() {
        LogUtils.d(TAG, "startPreview")
        if (null == mCameraDevice) {
            return
        }
        try {
            closePreviewSession()
            // Create a CaptureRequest.Builder; TEMPLATE_PREVIEW indicates a preview request
            mPreviewBuilder = mCameraDevice?.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)

            // Keep the flash off during preview
            mPreviewBuilder?.set(CaptureRequest.FLASH_MODE, CaptureRequest.FLASH_MODE_OFF)
            // Set the preview frame rate
            if (fpsRanges.isNotEmpty()) {
                mPreviewBuilder?.set(
                    CaptureRequest.CONTROL_AE_TARGET_FPS_RANGE, fpsRanges[0]
                )
            }

            val list = arrayListOf<Surface>()

            // The Surface passed from the main process is the preview display target
            mSurface?.let {
                mPreviewBuilder?.addTarget(it)
                list.add(it)
            }

            mImageReader?.surface?.let {
                mPreviewBuilder?.addTarget(it)
                list.add(it)
            }

            mCameraDevice?.createCaptureSession(
                list, object : CameraCaptureSession.StateCallback() {
                    override fun onConfigured(session: CameraCaptureSession) {
                        LogUtils.d(TAG, "onConfigured")
                        try {
                            // Build the capture request
                            mCaptureRequest = mPreviewBuilder?.build()
                            mPreviewSession = session
                            // Keep issuing capture requests for continuous preview
                            mPreviewSession?.setRepeatingRequest(
                                mCaptureRequest!!, null, mCameraHandler
                            )
                        } catch (e: Exception) {
                            e.printStackTrace()
                        }
                    }

                    override fun onConfigureFailed(session: CameraCaptureSession) {}
                }, mCameraHandler
            )
        } catch (e: Exception) {
            e.printStackTrace()
            LogUtils.e(TAG, "startPreview failed:$e")
        }
    }

    // Close the current preview session
    private fun closePreviewSession() {
        if (mPreviewSession != null) {
            mPreviewSession?.close()
            mPreviewSession = null
        }
    }

    private val ImageReaderListener = ImageReader.OnImageAvailableListener { imageReader ->
        val image = imageReader.acquireNextImage() ?: return@OnImageAvailableListener
        // Convert the frame to NV21 and hand it to the face library through the listener
        mPreviewData?.onPreviewData(ImageUtil.getBytesFromImageAsType(image, ImageUtil.NV21))
        image.close()
    }

    fun stopCamera() {
        try {
            mPreviewSession?.close()
            mPreviewSession = null

            mCameraDevice?.close()
            mCameraDevice = null

            mCameraHandler?.removeCallbacksAndMessages(null)
            mCameraThread?.quitSafely()
            mCameraThread = null

            mImageReader?.close()
            mImageReader = null
        } catch (e: Exception) {
            e.printStackTrace()
            LogUtils.e(TAG, "stopCamera failed:$e")
        }
    }

}

The Surface received from the main process is added as a capture target so the preview renders there. A 640x480 ImageReader is created as a second target to supply frames for face detection and recognition; its OnImageAvailableListener converts each Image to NV21 and feeds it into the face library:

kotlin
private val ImageReaderListener = ImageReader.OnImageAvailableListener { imageReader ->
	val image = imageReader.acquireNextImage() ?: return@OnImageAvailableListener
	mPreviewData?.onPreviewData(ImageUtil.getBytesFromImageAsType(image, ImageUtil.NV21))
	image.close()
}
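
To close the loop between the IFaceService AIDL interface and UniCameraManager, a hedged sketch of the child-process Service is shown below; the class name, the ":face" process attribute, and the onBind wiring are assumptions, not part of the original code.

kotlin
import android.app.Service
import android.content.Intent
import android.os.IBinder
import android.view.Surface

// Runs in the child process (e.g. android:process=":face" in the manifest, assumed).
class FaceService : Service() {

    private val binder = object : IFaceService.Stub() {
        override fun startFaceCamera(surface: Surface?) {
            // Delegate to UniCameraManager with the Surface received from the main process.
            UniCameraManager.startCamera(applicationContext, surface)
        }

        override fun stopFaceCamera() {
            UniCameraManager.stopCamera()
        }
    }

    override fun onBind(intent: Intent?): IBinder = binder
}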

This completes real-time preview in the main process and face detection and recognition on the camera data in the child process, with the final result returned to the main process for state handling. Notice that all of the data flows through a Surface handle: an ImageReader exposes a Surface, and its counterpart, ImageWriter, can be created from a Surface. The same principle can therefore be used to pass audio data through a Surface.

Passing Audio

Speech recognition also requires frequent, real-time handling of audio data, so the Surface mechanism used for the video stream above works here as well. The main process opens the audio source and captures the data (the source might be native Android recording or an external USB microphone); the child process creates an ImageReader, obtains its Surface, and passes it to the main process for the data transfer. The AIDL callback interface is defined as follows:

aidl
import android.view.Surface;

// Declare any non-default types here with import statements

interface IAudioCallback {
    void onSurface(in Surface surface);
}

The child process creates the Surface:

kotlin
import android.graphics.ImageFormat
import android.media.Image
import android.media.ImageReader
import android.os.Handler
import android.os.HandlerThread
import android.view.Surface
import com.unisound.state.machine.util.LogUtils
import java.nio.ByteBuffer
import java.nio.ByteOrder


internal class AsrAudioManager(
    private val mCallBack: OnAudioListener,
    private val audioSize: Int = 100 * 32 / 2,
) {

    companion object {
        private const val TAG = "AudioManager"

        private const val WIDTH = 640
        private const val HEIGHT = 480
        private const val MAX_IMAGES = 1
    }

    private var mHandlerThread: HandlerThread? = null
    private var mHandler: Handler? = null
    private var mImageReader: ImageReader? = null

    @Volatile
    private var isSurfaceValid = false

    fun createSurface(onSurface: (surface: Surface?) -> Unit) {
        LogUtils.d(TAG, "create surface start .")
        mHandlerThread = HandlerThread("AsrServerThread")
        mHandlerThread?.start()
        mHandler = mHandlerThread?.looper?.let { Handler(it) }
        mImageReader = ImageReader.newInstance(
            WIDTH, HEIGHT,
            ImageFormat.YUV_420_888, MAX_IMAGES
        )
        mImageReader?.setOnImageAvailableListener({ reader ->
            val image = reader?.acquireNextImage()
            if (image != null) {
                val data = extractAudioFromYUV420(image, audioSize)
                mCallBack.onAudioData(data, data.size)
            }
            image?.close()
        }, mHandler)
        isSurfaceValid = true
        onSurface.invoke(mImageReader?.surface)
    }

    fun releaseSurface() {
        LogUtils.d(TAG, "releaseSurface")
        isSurfaceValid = false
        mImageReader?.close()
        mImageReader = null
        mHandlerThread?.quitSafely()
        mHandlerThread = null
        mHandler = null
    }


    private fun extractAudioFromYUV420(image: Image, originalLength: Int): ByteArray {
        val planes = image.planes
        val yBuffer = planes[0].buffer
        val audioData = ShortArray(originalLength)
        for (i in 0 until originalLength) {
            val highByte = yBuffer.get()
            audioData[i] = (highByte.toInt() shl 8).toShort()
        }
        val byteBuffer = ByteBuffer.allocate(audioData.size * 2).order(ByteOrder.LITTLE_ENDIAN)
        val shortBuffer = byteBuffer.asShortBuffer()
        shortBuffer.put(audioData)
        return byteBuffer.array()
    }

}
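
How the Surface created above reaches the main process is not shown in the original. One possible wiring, assuming the child process exposes an AIDL method such as registerAudioCallback for the main process to hand over its IAudioCallback (IAsrService, the method name, and the OnAudioListener shape are assumptions):

kotlin
import android.app.Service
import android.content.Intent
import android.os.IBinder

// Child process (sketch): when the main process registers its IAudioCallback,
// create the ImageReader-backed Surface and hand it back through onSurface().
class AsrService : Service() {

    private var audioManager: AsrAudioManager? = null

    private val binder = object : IAsrService.Stub() {
        override fun registerAudioCallback(callback: IAudioCallback) {
            audioManager = AsrAudioManager(object : OnAudioListener {
                override fun onAudioData(data: ByteArray, size: Int) {
                    // Raw PCM recovered from the Image; feed it to the ASR engine here.
                }
            }).also { manager ->
                manager.createSurface { surface -> callback.onSurface(surface) }
            }
        }
    }

    override fun onBind(intent: Intent?): IBinder = binder
}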

As shown in AsrAudioManager, the ImageReader is created with a 640x480 image size. After obtaining its Surface, the handle is passed via AIDL to the main process, which creates an ImageWriter from it to write data: the main process writes, and the child process's ImageReader listener extracts the audio from each Image. First, here is how the main process writes its audio buffer into the handle:

kotlin
import android.media.ImageWriter
import android.util.Log
import android.view.Surface

internal class AudioCallback : IAudioCallback.Stub(), IAudioSourceListener {

    private var mCallback: IAudioListener? = null
    private var mImageWriter: ImageWriter? = null

    init {
        AudioSource.setCallBack(this)
    }

    fun setAudioCallback(listener: IAudioListener) {
        this.mCallback = listener
    }

    override fun onAudioData(data: ByteArray?) {
        if (data == null) {
            return
        }
        tryCatching {
            if (mImageWriter != null) {
                val image = try {
                    mImageWriter?.dequeueInputImage()
                } catch (e: IllegalStateException) {
                    Log.e("AudioCallback", "image writer ex:${e.message}")
                    releaseImageWriter()
                    return@tryCatching
                }
                image?.let {
                    ImageUtil.mapAudioToYUV420(data, it)
                    try {
                        mImageWriter?.queueInputImage(it)
                    } catch (e: IllegalStateException) {
                        Log.e("AudioCallback", "Failed to queue input image: ${e.message}")
                        releaseImageWriter()
                    }
                }
            } else {
                Log.e("AudioCallback", "image writer is null")
            }
        }
    }

    override fun onSurface(surface: Surface?) {
        releaseImageWriter()
        tryCatching {
            if (surface != null) {
                mImageWriter = ImageWriter.newInstance(surface, 1)
            }
        }
    }

    private fun releaseImageWriter() {
        mImageWriter?.close()
        mImageWriter = null
    }
}
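
The AudioSource that drives onAudioData above is not shown in the original. Below is a minimal sketch of what such a main-process capture loop could look like with AudioRecord; the sample rate, buffer size, and the IAudioSourceListener shape are assumptions, and RECORD_AUDIO must already be granted:

kotlin
import android.annotation.SuppressLint
import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder
import kotlin.concurrent.thread

// Main process (sketch): capture 16 kHz mono 16-bit PCM and push it to the registered
// listener, which is the AudioCallback shown above.
object AudioSource {

    private const val SAMPLE_RATE = 16_000

    @Volatile private var listener: IAudioSourceListener? = null
    @Volatile private var running = false

    fun setCallBack(callback: IAudioSourceListener) {
        listener = callback
    }

    @SuppressLint("MissingPermission")
    fun start() {
        val minBuf = AudioRecord.getMinBufferSize(
            SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT
        )
        val record = AudioRecord(
            MediaRecorder.AudioSource.MIC, SAMPLE_RATE,
            AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf
        )
        running = true
        record.startRecording()
        thread(name = "AudioSourceThread") {
            // 3200 bytes = 100 ms of 16 kHz 16-bit mono, matching audioSize (1600 samples) above.
            val buffer = ByteArray(3200)
            while (running) {
                val read = record.read(buffer, 0, buffer.size)
                if (read > 0) listener?.onAudioData(buffer.copyOf(read))
            }
            record.stop()
            record.release()
        }
    }

    fun stop() {
        running = false
    }
}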

From the received Surface an ImageWriter is created, and the main process writes its audio buffer into the handle through that ImageWriter. Because the handle carries an image format, the audio has to be converted before it is written:

kotlin
import android.media.Image
import java.nio.ByteBuffer
import java.nio.ByteOrder

internal object ImageUtil {
    private const val TAG = "ImageUtil"

    private const val WIDTH = 640
    private const val HEIGHT = 480
    private const val UV_SIZE = WIDTH * HEIGHT / 4 // Each chroma plane of YUV_420_888 holds a quarter as many samples as the Y plane

    fun mapAudioToYUV420(buffer: ByteArray, image: Image) {
        val shorts = ShortArray(buffer.size / 2)
        ByteBuffer.wrap(buffer).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().get(shorts)

        val planes = image.planes
        val yBuffer = planes[0].buffer
        // Write only the high byte of each 16-bit sample into the Y plane
        for (i in shorts.indices) {
            val highByte = (shorts[i].toInt() shr 8 and 0xFF).toByte()
            yBuffer.put(highByte)
        }
        }
        val uBuffer = planes[1].buffer
        val vBuffer = planes[2].buffer
        for (i in 0 until UV_SIZE) {
            uBuffer.put(127.toByte()) // neutral U value
            vBuffer.put(127.toByte()) // neutral V value
        }
    }
}

The image format is YUV_420_888: the raw audio is interpreted as 16-bit samples and the high byte of each sample is written into the Y plane, while the U and V planes are filled with the neutral value 127 (see other references for the details of the YUV420 memory layout). Each audio buffer is thereby packed into an Image, and the child process reads the audio back using the same layout:

kotlin
private fun extractAudioFromYUV420(image: Image, originalLength: Int): ByteArray {
	val planes = image.planes
	val yBuffer = planes[0].buffer
	val audioData = ShortArray(originalLength)
	for (i in 0 until originalLength) {
		val highByte = yBuffer.get()
		audioData[i] = (highByte.toInt() shl 8).toShort()
	}
	val byteBuffer = ByteBuffer.allocate(audioData.size * 2).order(ByteOrder.LITTLE_ENDIAN)
	val shortBuffer = byteBuffer.asShortBuffer()
	shortBuffer.put(audioData)
	return byteBuffer.array()
}

As above, the audio is recovered from the Image: originalLength is the number of samples in the original audio buffer, and reading that many bytes from the Y plane yields the raw audio, which is converted back to a ByteArray. Once the child process has the raw audio it can run speech recognition on it and return the result to the main process via AIDL.

Conclusion

With a Surface handle we can pass audio and video data across processes: the child process handles the data and only the final result goes back to the main process, a crash in a child process does not break the main process flow, and real-time, high-throughput processing works just as well as doing it in the main process.

android·性能优化