Camera2 Overall Architecture

Preface

This section covers:

  • Implementing simple camera capture with Camera2
  • The core principles of Camera2

Steps for a simple capture

  • Find the camera id you need according to your configuration
  • Open the camera; this step is asynchronous, and the result is delivered through a callback
  • Once the camera has opened successfully, the client needs to connect to it by creating a session, i.e. createCaptureSession
  • After the session is created, you can issue the predefined template requests, such as Preview, Capture (still photo) and Record (video)

For a complete demo, see camera-samples/Camera2Basic at main · android/camera-samples (github.com); a minimal sketch of the four steps follows below.
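
A minimal Kotlin sketch of those four steps, assuming previewSurface is a valid Surface (e.g. from a SurfaceView or TextureView) and handler is bound to a background thread; permission checks and error handling are omitted:

kotlin
import android.annotation.SuppressLint
import android.content.Context
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraCharacteristics
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler
import android.view.Surface

// Minimal sketch, not production code: permission checks, threading details and
// error handling are omitted; previewSurface and handler are assumed to exist.
@SuppressLint("MissingPermission")
fun startPreview(context: Context, previewSurface: Surface, handler: Handler) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager

    // 1. Pick a camera id according to your needs (here: the first back-facing camera).
    val cameraId = manager.cameraIdList.first { id ->
        manager.getCameraCharacteristics(id)
            .get(CameraCharacteristics.LENS_FACING) == CameraCharacteristics.LENS_FACING_BACK
    }

    // 2. Open the camera; the result arrives asynchronously in a StateCallback.
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(device: CameraDevice) {
            // 3. Create a session connecting the device to our output Surface(s).
            device.createCaptureSession(listOf(previewSurface),
                object : CameraCaptureSession.StateCallback() {
                    override fun onConfigured(session: CameraCaptureSession) {
                        // 4. Issue a template request, e.g. a repeating preview request.
                        val request = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
                            .apply { addTarget(previewSurface) }
                            .build()
                        session.setRepeatingRequest(request, null, handler)
                    }
                    override fun onConfigureFailed(session: CameraCaptureSession) { /* ... */ }
                }, handler)
        }
        override fun onDisconnected(device: CameraDevice) = device.close()
        override fun onError(device: CameraDevice, error: Int) = device.close()
    }, handler)
}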

Understanding the capture flow

  • The overall design is a pipeline model, which you can also think of as an IO model: a pipe has one input and one output, and there is an endless loop pulling buffers from the stream

  • CameraManager is relatively fixed and fairly simple. openCamera takes a camera id; once it returns, only the "I" of the IO (the input) is open. The output has to be opened through CameraDevice.

  • CameraDevice first creates a session to open the output end of the pipe (a time-consuming operation, hence the asynchronous callback). The Surface is where the buffer data captured by the camera is displayed. The function signature below makes this clearer: it accepts multiple Surfaces (a usage sketch follows right after the signature).

    java
    public abstract void createCaptureSession(@NonNull List<Surface> outputs,
            @NonNull CameraCaptureSession.StateCallback callback, @Nullable Handler handler)
            throws CameraAccessException;
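
For example, a preview Surface and an ImageReader Surface can be configured into the same session. A hedged sketch (previewSurface and handler are assumed to exist; the JPEG size is arbitrary):

kotlin
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.media.ImageReader
import android.os.Handler
import android.view.Surface

// Sketch only: previewSurface and handler come from the caller.
fun createDualOutputSession(device: CameraDevice, previewSurface: Surface, handler: Handler) {
    // Second output: an ImageReader Surface that will receive JPEG still captures.
    val jpegReader = ImageReader.newInstance(4000, 3000, ImageFormat.JPEG, 2)

    device.createCaptureSession(
        listOf(previewSurface, jpegReader.surface),   // multiple output Surfaces at once
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) { /* issue requests here */ }
            override fun onConfigureFailed(session: CameraCaptureSession) { /* handle the error */ }
        },
        handler
    )
}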

Given this model, three questions naturally arise:

  • How is the input end opened?
  • How is the output end opened?
  • How are the input and output ends referenced and connected?

So let's look at the core principles of Camera2.

Camera2 Core Principles

Diagram

A top-level view of how it works:

  • openCamera mainly initializes CameraDevice and RequestThread, a polling thread that pulls from the stream endlessly. This thread checks whether there is a CaptureRequest; if not, it keeps polling until a CaptureRequest has been built.
  • After openCamera, the input and output streams need to be opened. This is the job of createCaptureSession, which at the native layer uses the core class Camera3Device created in the first step to open them.
  • After createCaptureSession, the last step is setRepeatingRequest, a template request for the preview, still-capture or recording stream; the lower layers translate it via the template id. Once the request is created, the RequestThread's prepareHalRequests function pulls buffers from the stream. A conceptual sketch of this polling model follows below.
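
To make that polling model concrete, here is a purely conceptual Kotlin sketch; it is not the real framework code (the real implementation is the C++ shown later), and every name in it is invented for illustration:

kotlin
import java.util.concurrent.LinkedBlockingQueue
import java.util.concurrent.TimeUnit

// Conceptual model only: FakeCaptureRequest and processCaptureRequest stand in for
// the real CaptureRequest and the HAL's process_capture_request call.
data class FakeCaptureRequest(val templateId: Int)

class ConceptualRequestThread(
    private val pendingRequests: LinkedBlockingQueue<FakeCaptureRequest>, // filled by setRepeatingRequest()
    private val processCaptureRequest: (FakeCaptureRequest) -> Unit       // stand-in for the HAL call
) : Thread("Conceptual-ReqQueue") {
    @Volatile var running = true

    override fun run() {
        while (running) {
            // Keep polling until the client has actually built and submitted a CaptureRequest.
            val request = pendingRequests.poll(50, TimeUnit.MILLISECONDS) ?: continue
            // "prepareHalRequests": attach output buffers and convert to a HAL request.
            // "sendRequestsBatch": hand the converted request down to the HAL.
            processCaptureRequest(request)
        }
    }
}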

Now let's look at it from the source-code perspective:

OpenCamera

This part opens the camera's input stream.

Camera3Device.cpp → initializeCommonLocked

cpp
status_t Camera3Device::initializeCommonLocked() {

    /** Start up status tracker thread */
    mStatusTracker = new StatusTracker(this);
    status_t res = mStatusTracker->run(String8::format("C3Dev-%s-Status", mId.string()).string());
  

    /** Register in-flight map to the status tracker */
    mInFlightStatusId = mStatusTracker->addComponent("InflightRequests");

    /** Create buffer manager */
    mBufferManager = new Camera3BufferManager();

    /** Start up request queue thread */
    mRequestThread = createNewRequestThread(
            this, mStatusTracker, mInterface, sessionParamKeys,
            mUseHalBufManager, mSupportCameraMute, mOverrideToPortrait);

    res = mRequestThread->run(String8::format("C3Dev-%s-ReqQueue", mId.string()).string());
    if (res != OK) {
        SET_ERR_L("Unable to start request queue thread: %s (%d)",
                strerror(-res), res);
        mInterface->close();
        mRequestThread.clear();
        return res;
    }

    mPreparerThread = new PreparerThread();

    internalUpdateStatusLocked(STATUS_UNCONFIGURED);
    mNextStreamId = 0;
    mFakeStreamId = NO_STREAM;
    mNeedConfig = true;
    mPauseStateNotify = false;
    mIsInputStreamMultiResolution = false;

    // Measure the clock domain offset between camera and video/hw_composer
    mTimestampOffset = getMonoToBoottimeOffset();
    camera_metadata_entry timestampSource =
            mDeviceInfo.find(ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE);
    if (timestampSource.count > 0 && timestampSource.data.u8[0] ==
            ANDROID_SENSOR_INFO_TIMESTAMP_SOURCE_REALTIME) {
        mDeviceTimeBaseIsRealtime = true;
    }

    return OK;
}
  • This is the C++-side initialization of Camera3Device after openCamera is called
  • Internally it does a few things:
    • Initializes RequestThread, the thread that pulls buffers
    • Initializes the buffer-preparation thread (PreparerThread)

Camera3Device.cpp → RequestThread::threadLoop()

cpp
bool Camera3Device::RequestThread::threadLoop() {
    ATRACE_CALL();
    status_t res;
    // Any function called from threadLoop() must not hold mInterfaceLock since
    // it could lead to deadlocks (disconnect() -> hold mInterfaceMutex -> wait for request thread
    // to finish -> request thread waits on mInterfaceMutex) http://b/143513518

    // Handle paused state.
    if (waitIfPaused()) {
        return true;
    }

    // Wait for the next batch of requests.
    waitForNextRequestBatch();
    if (mNextRequests.size() == 0) {
        return true;
    }

    // Prepare a batch of HAL requests and output buffers.
    res = prepareHalRequests();
   
    // Submit a batch of requests to HAL.
    // Use flush lock only when submitting multilple requests in a batch.
    // TODO: The problem with flush lock is flush() will be blocked by process_capture_request()
    // which may take a long time to finish so synchronizing flush() and
    // process_capture_request() defeats the purpose of cancelling requests ASAP with flush().
    // For now, only synchronize for high speed recording and we should figure something out for
    // removing the synchronization.
    bool useFlushLock = mNextRequests.size() > 1;

    if (useFlushLock) {
        mFlushLock.lock();
    }

    ALOGVV("%s: %d: submitting %zu requests in a batch.", __FUNCTION__, __LINE__,
            mNextRequests.size());

    sp<Camera3Device> parent = mParent.promote();
    if (parent != nullptr) {
        parent->mRequestBufferSM.onSubmittingRequest();
    }

    bool submitRequestSuccess = false;
    nsecs_t tRequestStart = systemTime(SYSTEM_TIME_MONOTONIC);
    submitRequestSuccess = sendRequestsBatch();

    nsecs_t tRequestEnd = systemTime(SYSTEM_TIME_MONOTONIC);
    mRequestLatency.add(tRequestStart, tRequestEnd);

    if (useFlushLock) {
        mFlushLock.unlock();
    }

    // Unset as current request
    {
        Mutex::Autolock l(mRequestLock);
        mNextRequests.clear();
    }
    mRequestSubmittedSignal.signal();

    return submitRequestSuccess;
}
  1. threadLoop runs inside a loop; the loop itself is implemented in system/core/libutils/Threads.cpp
  2. waitForNextRequestBatch waits for requests from the client
  3. prepareHalRequests takes the client requests and converts them into HAL requests
  4. Steps 2 and 3 are essentially waiting for the client to call setRepeatingRequest
  5. Now let's see what sendRequestsBatch does

Camera3Device.cpp → sendRequestsBatch

cpp
bool Camera3Device::RequestThread::sendRequestsBatch() {
    ATRACE_CALL();
    status_t res;
    size_t batchSize = mNextRequests.size();
    std::vector<camera_capture_request_t*> requests(batchSize);
    uint32_t numRequestProcessed = 0;
    for (size_t i = 0; i < batchSize; i++) {
        requests[i] = &mNextRequests.editItemAt(i).halRequest;
        ATRACE_ASYNC_BEGIN("frame capture", mNextRequests[i].halRequest.frame_number);
    }

    res = mInterface->processBatchCaptureRequests(requests, &numRequestProcessed);

    bool triggerRemoveFailed = false;
    NextRequest& triggerFailedRequest = mNextRequests.editItemAt(0);
    for (size_t i = 0; i < numRequestProcessed; i++) {
        NextRequest& nextRequest = mNextRequests.editItemAt(i);
        nextRequest.submitted = true;

        updateNextRequest(nextRequest);

        if (!triggerRemoveFailed) {
            // Remove any previously queued triggers (after unlock)
            status_t removeTriggerRes = removeTriggers(mPrevRequest);
            if (removeTriggerRes != OK) {
                triggerRemoveFailed = true;
                triggerFailedRequest = nextRequest;
            }
        }
    }

    if (triggerRemoveFailed) {
        SET_ERR("RequestThread: Unable to remove triggers "
              "(capture request %d, HAL device: %s (%d)",
              triggerFailedRequest.halRequest.frame_number, strerror(-res), res);
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }

    if (res != OK) {
        // Should only get a failure here for malformed requests or device-level
        // errors, so consider all errors fatal.  Bad metadata failures should
        // come through notify.
        SET_ERR("RequestThread: Unable to submit capture request %d to HAL device: %s (%d)",
                mNextRequests[numRequestProcessed].halRequest.frame_number,
                strerror(-res), res);
        cleanUpFailedRequests(/*sendRequestError*/ false);
        return false;
    }
    return true;
}
  • It simply remaps the requests; in Kotlin terms: requests = mNextRequests.map { it.halRequest }

  • The requests are handed to processBatchCaptureRequests

  • mInterface is of type sp<[HalInterface](https://cs.android.com/android/platform/superproject/main/+/main:frameworks/av/services/camera/libcameraservice/device3/Camera3Device.h;drc=f73ecbab632015390459f04ed847378e98ae6df8;bpv=0;bpt=1;l=365)>, which has two implementations, AidlCamera3Device::AidlHalInterface and HidlCamera3Device::HidlHalInterface. Here we look at the second one; as the code below shows, the requests are passed straight down to the HAL for processing

    cpp
    if (hidlSession_3_7 != nullptr) {
        err = hidlSession_3_7->processCaptureRequest_3_7(captureRequests_3_7, cachesToRemove,
                                                         resultCallback);
    } else if (hidlSession_3_4 != nullptr) {
        err = hidlSession_3_4->processCaptureRequest_3_4(captureRequests_3_4, cachesToRemove,
                                                         resultCallback);
    } else {
        err = mHidlSession->processCaptureRequest(captureRequests, cachesToRemove,
                                                  resultCallback);
    }
  • The result of processCaptureRequest is returned through [CameraDeviceSession](https://cs.android.com/android/platform/superproject/main/+/main:hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.h;drc=f73ecbab632015390459f04ed847378e98ae6df8;bpv=1;bpt=1;l=74)::[ResultBatcher](https://cs.android.com/android/platform/superproject/main/+/main:hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.h;drc=f73ecbab632015390459f04ed847378e98ae6df8;bpv=1;bpt=1;l=187)::[processCaptureResult](https://cs.android.com/android/platform/superproject/main/+/main:hardware/interfaces/camera/device/3.2/default/CameraDeviceSession.cpp;drc=f73ecbab632015390459f04ed847378e98ae6df8;bpv=1;bpt=1;l=762?gsn=processCaptureResult&gs=KYTHE%3A%2F%2Fkythe%3A%2F%2Fandroid.googlesource.com%2Fplatform%2Fsuperproject%2Fmain%2F%2Fmain%3Flang%3Dc%252B%252B%3Fpath%3Dhardware%2Finterfaces%2Fcamera%2Fdevice%2F3.2%2Fdefault%2FCameraDeviceSession.cpp%23sZxOW6VPbDCfPMIHtyauNpMSq4DFupQ74tMbCP_SwAg)

That completes the openCamera flow. Now let's see how createCaptureSession opens the input and output streams.

The createCaptureSession flow

In short, the core step is Camera3Device → createStream. As the name suggests, the createCaptureSession flow is essentially the process of constructing streams.

CameraDeviceImpl → configureStreamsChecked

This function does a few things:

  • Checks the configuration
  • When inputConfig is not null, creates the external input stream; its main use cases are filters and similar effects
  • Creates the output streams

Camera3Device.cpp → createInputStream

The main scenarios for creating an input stream are complex use cases such as AR, filters, and stickers; an app-side sketch follows below.
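
From the app side, an input stream corresponds to passing an InputConfiguration and creating a reprocessable session. A hedged sketch, assuming the sizes/formats below are among the device's supported reprocessing configurations and the two ImageReaders are just illustrative outputs:

kotlin
import android.graphics.ImageFormat
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.params.InputConfiguration
import android.media.ImageReader
import android.os.Handler

// Sketch: sizes and formats must match what the camera reports as supported
// for reprocessing; yuvReader/jpegReader are assumed outputs for illustration.
fun createReprocessSession(device: CameraDevice, handler: Handler) {
    val yuvReader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888, 4)
    val jpegReader = ImageReader.newInstance(1920, 1080, ImageFormat.JPEG, 2)

    // The InputConfiguration describes the input stream that the native layer
    // sets up (see Camera3Device.cpp → createInputStream below).
    val inputConfig = InputConfiguration(1920, 1080, ImageFormat.YUV_420_888)

    device.createReprocessableCaptureSession(
        inputConfig,
        listOf(yuvReader.surface, jpegReader.surface),
        object : CameraCaptureSession.StateCallback() {
            override fun onConfigured(session: CameraCaptureSession) {
                // Frames to be reprocessed are fed back through session.inputSurface.
            }
            override fun onConfigureFailed(session: CameraCaptureSession) { /* handle the error */ }
        },
        handler
    )
}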

Camera3Device.cpp → createStream

cpp
status_t Camera3Device::createStream(sp<Surface> consumer,
            uint32_t width, uint32_t height, int format,
            android_dataspace dataSpace, camera_stream_rotation_t rotation, int *id,
            const std::string& physicalCameraId,
            const std::unordered_set<int32_t> &sensorPixelModesUsed,
            std::vector<int> *surfaceIds, int streamSetId, bool isShared, bool isMultiResolution,
            uint64_t consumerUsage, int64_t dynamicRangeProfile, int64_t streamUseCase,
            int timestampBase, int mirrorMode) {
    ATRACE_CALL();

    if (consumer == nullptr) {
        ALOGE("%s: consumer must not be null", __FUNCTION__);
        return BAD_VALUE;
    }

    std::vector<sp<Surface>> consumers;
    consumers.push_back(consumer);
    // Delegates to the overloaded createStream of the same class below
    return createStream(consumers, /*hasDeferredConsumer*/ false, width, height,
            format, dataSpace, rotation, id, physicalCameraId, sensorPixelModesUsed, surfaceIds,
            streamSetId, isShared, isMultiResolution, consumerUsage, dynamicRangeProfile,
            streamUseCase, timestampBase, mirrorMode);
}

status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation, int *id,
        const std::string& physicalCameraId,
        const std::unordered_set<int32_t> &sensorPixelModesUsed,
        std::vector<int> *surfaceIds, int streamSetId, bool isShared, bool isMultiResolution,
        uint64_t consumerUsage, int64_t dynamicRangeProfile, int64_t streamUseCase,
        int timestampBase, int mirrorMode) {

    if (format == HAL_PIXEL_FORMAT_BLOB) {
        // ...
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, blobBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (format == HAL_PIXEL_FORMAT_RAW_OPAQUE) {
        // ...
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, rawOpaqueBufferSize, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (isShared) {
        newStream = new Camera3SharedOutputStream(mNextStreamId, consumers,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                mUseHalBufManager, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else if (consumers.size() == 0 && hasDeferredConsumer) {
        newStream = new Camera3OutputStream(mNextStreamId,
                width, height, format, consumerUsage, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    } else {
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, transport, streamSetId,
                isMultiResolution, dynamicRangeProfile, streamUseCase, mDeviceTimeBaseIsRealtime,
                timestampBase, mirrorMode);
    }

    newStream->setStatusTracker(mStatusTracker);

    newStream->setBufferManager(mBufferManager);

    newStream->setImageDumpMask(mImageDumpMask);

    res = mOutputStreams.add(mNextStreamId, newStream);
    // ...
    return OK;
}
  • The actual type of the stream is Camera3OutputStream
  • The consumer (the native-side reference to the Surface) is handed to Camera3OutputStream
  • The newly constructed stream, newStream, is added to mOutputStreams

That completes stream creation. setRepeatingRequest is comparatively simple, so we only sketch the app-side call below before wrapping up with a summary.
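
Since we skip the native side of setRepeatingRequest, here is a minimal app-side sketch of what the template requests look like (device, session, previewSurface, jpegSurface and handler are assumed from earlier):

kotlin
import android.hardware.camera2.CameraCaptureSession
import android.hardware.camera2.CameraDevice
import android.os.Handler
import android.view.Surface

// Sketch: the template id (TEMPLATE_PREVIEW / TEMPLATE_STILL_CAPTURE) is what the
// lower layers later translate into a concrete HAL request.
fun startRepeatingPreview(device: CameraDevice, session: CameraCaptureSession,
                          previewSurface: Surface, handler: Handler) {
    val previewRequest = device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW)
        .apply { addTarget(previewSurface) }
        .build()
    // Repeating request: the RequestThread keeps resubmitting it for every frame.
    session.setRepeatingRequest(previewRequest, null, handler)
}

fun takePicture(device: CameraDevice, session: CameraCaptureSession,
                jpegSurface: Surface, handler: Handler) {
    val stillRequest = device.createCaptureRequest(CameraDevice.TEMPLATE_STILL_CAPTURE)
        .apply { addTarget(jpegSurface) }
        .build()
    // One-shot request: queued once into the same request queue.
    session.capture(stillRequest, null, handler)
}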

Summary

A brief summary, as shown in the diagram:

  • The output stream dequeues a buffer from the BufferQueue and hands it to the HAL, which fills it
  • The output stream pulls buffers at a fixed rate based on the fps, then queues them back into the BufferQueue for SurfaceFlinger to render