Android camera2

I. Preface

This document was written to consolidate the knowledge accumulated so far and to make future problem investigation easier.
The analysis follows how a camera app uses the camera2 API:
(1) Open the camera: openCamera
(2) Create a session: createCaptureSession
(3) Start preview: setRepeatingRequest
(4) Stop preview: stopRepeating
(5) Close the camera: close

II. Opening the Camera

1. The first step for a camera app using the camera2 API is to call CameraManager's openCamera interface. This interface takes three parameters: the camera id, a CameraDevice.StateCallback, and the Handler on which the callbacks are delivered.
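
A minimal sketch of this call, assuming the CAMERA permission has already been granted and that cameraId comes from CameraManager.getCameraIdList():

import android.content.Context;
import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CameraManager;
import android.os.Handler;

public class CameraOpener {
    // Sketch: openCamera takes the camera id, a CameraDevice.StateCallback,
    // and the Handler that the callbacks are delivered on.
    public static void open(Context context, String cameraId, Handler handler)
            throws CameraAccessException {
        CameraManager manager =
                (CameraManager) context.getSystemService(Context.CAMERA_SERVICE);
        manager.openCamera(cameraId, new CameraDevice.StateCallback() {
            @Override
            public void onOpened(CameraDevice device) {
                // device is backed by CameraDeviceImpl; createCaptureSession()
                // is normally called from here (see section III).
            }

            @Override
            public void onDisconnected(CameraDevice device) {
                device.close();
            }

            @Override
            public void onError(CameraDevice device, int error) {
                device.close();
            }
        }, handler);
    }
}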

2. Call sequence

2.1 Calling the openCamera interface

2.2 Internally this calls openCameraDeviceUserAsync

The internal method getCameraCharacteristics is called to query the camera device's properties; this goes through a binder call into CameraService (C++). The properties of interest here are mainly the camera facing (front/back) and the image display orientation (0, 90, 180 or 270 degrees).
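
A minimal sketch of how those two properties are read on the app side through CameraCharacteristics (assuming a valid CameraManager and cameraId):

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCharacteristics;
import android.hardware.camera2.CameraManager;
import android.util.Log;

public class CharacteristicsDump {
    private static final String TAG = "CharacteristicsDump";

    // Sketch: read the facing and sensor orientation mentioned above.
    public static void dump(CameraManager manager, String cameraId)
            throws CameraAccessException {
        CameraCharacteristics chars = manager.getCameraCharacteristics(cameraId);
        Integer facing = chars.get(CameraCharacteristics.LENS_FACING);             // FRONT / BACK / EXTERNAL
        Integer orientation = chars.get(CameraCharacteristics.SENSOR_ORIENTATION); // 0, 90, 180 or 270
        Log.d(TAG, "facing=" + facing + ", sensorOrientation=" + orientation);
    }
}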

When CameraDeviceImpl is created (new CameraDeviceImpl), several member variables are stored in that object. CameraDeviceImpl also provides a CameraDeviceCallbacks object; later this callback is handed to CameraDeviceClientBase through CameraService's connect() interface, so any subsequent frame-related error during preview, recording or still capture is reported back along CameraDeviceCallbacks --> CaptureCallback.
Next, a client is created (new client).
Inside the makeClient method, a different client is instantiated depending on the camera API version.

new CameraDeviceClient -> new Camera2ClientBase -> new CameraDeviceClientBase() -> new BasicClient(); the Camera2ClientBase constructor also creates a Camera3Device (new Camera3Device).


After the new client has been created, initialize is called.

CameraDeviceClient::initialize() -> Camera2ClientBase::initialize() -> Camera2ClientBase::initializeImpl() -> Camera3Device::initialize() -> CameraProviderManager::openSession. The CameraProviderManager class can be understood as a proxy for the camera HAL.



Camera3Device holds a CameraProviderManager object. CameraProviderManager can be understood as the bridge between the native layer and the camera HAL: it enumerates the camera providers and the camera devices those providers expose, and offers methods to access them.

A quick digression on how CameraProviderManager is instantiated and initialized. As shown in the figure below, when CameraService is created, onFirstRef() is entered automatically; there a CameraProviderManager is created (new CameraProviderManager) and mCameraProviderManager's initialize() method is called right away. The second parameter of initialize() is a default argument set to sHardwareServiceInteractionProxy, an object of HardwareServiceInteractionProxy. HardwareServiceInteractionProxy has two methods, registerForNotifications() and getService(); getService() obtains a remote proxy of the CameraProvider class running in the [email protected] process. CameraProvider's initialize() method is invoked from CameraProvider's constructor; inside CameraProvider::initialize(), hw_get_module() opens the dynamic library (camera*.so), and CameraProvider then holds that module through a CameraModule.

The camera_module_t *mModule inside CameraModule holds the handle to that .so, which is the real implementation that operates the camera device.



Next comes the addDevice method.



That ends the digression. Back to 2.15 above, deviceInfo3->mInterface->open: mInterface is the CameraDevice, which holds the CameraModule, so this calls CameraModule::open, which in turn calls open in v4l2_camera_hal, the real implementer in the HAL layer.




III. Creating a Session

1. After openCamera succeeds, a CameraDeviceImpl object is returned; its methods are called later when creating the session and when requesting preview.

2. Creating a session calls CameraDeviceImpl::createCaptureSession, which takes three parameters: the list of output Surfaces, a CameraCaptureSession.StateCallback, and a Handler (a usage sketch follows the framework code below).

3. Code-flow analysis of session creation

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

    // After open camera succeeds, a CameraDeviceImpl object is returned, and the session is created in the onOpened callback.
    // Two surfaces are passed in here: one for still capture and one for preview.
    public void createCaptureSession(List<Surface> outputs,
            CameraCaptureSession.StateCallback callback, Handler handler)
            throws CameraAccessException {
        List<OutputConfiguration> outConfigurations = new ArrayList<>(outputs.size());
        for (Surface surface : outputs) {
            outConfigurations.add(new OutputConfiguration(surface));
        }
        createCaptureSessionInternal(null, outConfigurations, callback,
                checkAndWrapHandler(handler), /*operatingMode*/ICameraDeviceUser.NORMAL_MODE,
                /*sessionParams*/ null);
    }
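
For context, a minimal app-side sketch of invoking the method above; it assumes a preview Surface provided by the UI and an ImageReader created elsewhere for JPEG capture (both placeholder names), matching the two surfaces mentioned in the comment:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

import java.util.Arrays;

public class SessionCreator {
    // Sketch: the three parameters are the output Surfaces, a
    // CameraCaptureSession.StateCallback and a Handler.
    public static void create(CameraDevice device, Surface previewSurface,
            ImageReader jpegReader, Handler handler) throws CameraAccessException {
        device.createCaptureSession(
                Arrays.asList(previewSurface, jpegReader.getSurface()),
                new CameraCaptureSession.StateCallback() {
                    @Override
                    public void onConfigured(CameraCaptureSession session) {
                        // Stream configuration succeeded; setRepeatingRequest()
                        // is typically called from here (see section IV).
                    }

                    @Override
                    public void onConfigureFailed(CameraCaptureSession session) {
                        // The stream combination was rejected.
                    }
                }, handler);
    }
}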

frameworks/base/core/java/android/hardware/camera2/params/OutputConfiguration.java

    public OutputConfiguration(@NonNull Surface surface) {
        this(SURFACE_GROUP_ID_NONE, surface, ROTATION_0);
    }


    
    @SystemApi
    public OutputConfiguration(int surfaceGroupId, @NonNull Surface surface, int rotation) {
        checkNotNull(surface, "Surface must not be null");
        checkArgumentInRange(rotation, ROTATION_0, ROTATION_270, "Rotation constant");
        mSurfaceGroupId = surfaceGroupId;
        mSurfaceType = SURFACE_TYPE_UNKNOWN;
        mSurfaces = new ArrayList<Surface>(); // effectively the only valid entry is this surface
        mSurfaces.add(surface);
        mRotation = rotation;
        mConfiguredSize = SurfaceUtils.getSurfaceSize(surface);
        mConfiguredFormat = SurfaceUtils.getSurfaceFormat(surface);
        mConfiguredDataspace = SurfaceUtils.getSurfaceDataspace(surface);
        mConfiguredGenerationId = surface.getGenerationId();
        mIsDeferredConfig = false;
        mIsShared = false;
        mPhysicalCameraId = null;
        mIsMultiResolution = false;
        mSensorPixelModesUsed = new ArrayList<Integer>();
    }

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

Continuing with createCaptureSessionInternal from above:

private void createCaptureSessionInternal(InputConfiguration inputConfig,
            List<OutputConfiguration> outputConfigurations,
            CameraCaptureSession.StateCallback callback, Executor executor,
            int operatingMode, CaptureRequest sessionParams) throws CameraAccessException {
            .......
            try {
                // configure streams and then block until IDLE
                // stream creation starts here
                configureSuccess = configureStreamsChecked(inputConfig, outputConfigurations,
                        operatingMode, sessionParams, createSessionStartTime);
                if (configureSuccess == true && inputConfig != null) {
                    input = mRemoteDevice.getInputSurface();
                }
            } catch (CameraAccessException e) {
                configureSuccess = false;
                pendingException = e;
                input = null;
                if (DEBUG) {
                    Log.v(TAG, "createCaptureSession - failed with exception ", e);
                }
            }

            // Fire onConfigured if configureOutputs succeeded, fire onConfigureFailed otherwise.
            CameraCaptureSessionCore newSession = null;
            if (isConstrainedHighSpeed) {
                ArrayList<Surface> surfaces = new ArrayList<>(outputConfigurations.size());
                for (OutputConfiguration outConfig : outputConfigurations) {
                    surfaces.add(outConfig.getSurface());
                }
                StreamConfigurationMap config =
                    getCharacteristics().get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
                SurfaceUtils.checkConstrainedHighSpeedSurfaces(surfaces, /*fpsRange*/null, config);

                newSession = new CameraConstrainedHighSpeedCaptureSessionImpl(mNextSessionId++,
                        callback, executor, this, mDeviceExecutor, configureSuccess,
                        mCharacteristics);
            } else {
                // After the streams are created above, the session is created; that session
                // object is later used to request camera preview.
                // Creating the session is just constructing a plain Java object.
                newSession = new CameraCaptureSessionImpl(mNextSessionId++, input,
                        callback, executor, this, mDeviceExecutor, configureSuccess);
            }

            // TODO: wait until current session closes, then create the new session
            mCurrentSession = newSession;

            if (pendingException != null) {
                throw pendingException;
            }

            mSessionStateCallback = mCurrentSession.getDeviceStateCallback();
        }
    }

Building the streams

 public boolean configureStreamsChecked(InputConfiguration inputConfig,
            List<OutputConfiguration> outputs, int operatingMode, CaptureRequest sessionParams,
            long createSessionStartTime)
                    throws CameraAccessException {
        // Treat a null input the same an empty list
        if (outputs == null) {
            outputs = new ArrayList<OutputConfiguration>();
        }
        if (outputs.size() == 0 && inputConfig != null) {
            throw new IllegalArgumentException("cannot configure an input stream without " +
                    "any output streams");
        }
        
        // the inputConfig passed in is null
        checkInputConfiguration(inputConfig);

        boolean success = false;

        synchronized(mInterfaceLock) {
            checkIfCameraClosedOrInError();
            // Streams to create
            HashSet<OutputConfiguration> addSet = new HashSet<OutputConfiguration>(outputs);
            // Streams to delete
            List<Integer> deleteList = new ArrayList<Integer>();

            // Determine which streams need to be created, which to be deleted
            // for a newly created CameraDeviceImpl object, mConfiguredOutputs.size() is 0
            for (int i = 0; i < mConfiguredOutputs.size(); ++i) {
                int streamId = mConfiguredOutputs.keyAt(i);
                OutputConfiguration outConfig = mConfiguredOutputs.valueAt(i);

                if (!outputs.contains(outConfig) || outConfig.isDeferredConfiguration()) {
                    // Always delete the deferred output configuration when the session
                    // is created, as the deferred output configuration doesn't have unique surface
                    // related identifies.
                    deleteList.add(streamId);
                } else {
                    addSet.remove(outConfig);  // Don't create a stream previously created
                }
            }

            mDeviceExecutor.execute(mCallOnBusy);
            // for a newly created CameraDeviceImpl object, mRepeatingRequestId == REQUEST_ID_NONE,
            // so stopRepeating() effectively does nothing
            stopRepeating();

            try {
                waitUntilIdle();
                // calls into CameraDeviceClient::beginConfigure, which is an empty implementation and does nothing
                mRemoteDevice.beginConfigure();

                ......
                // Delete all streams first (to free up HW resources)
                for (Integer streamId : deleteList) {
                    mRemoteDevice.deleteStream(streamId);
                    mConfiguredOutputs.delete(streamId);
                }

                // Add all new streams
                // create streams for the new OutputConfigurations
                for (OutputConfiguration outConfig : outputs) {
                    if (addSet.contains(outConfig)) {
                        // this is where the Camera3OutputStream gets created
                        int streamId = mRemoteDevice.createStream(outConfig);
                        mConfiguredOutputs.put(streamId, outConfig);
                    }
                }

                int offlineStreamIds[];
                if (sessionParams != null) {
                    // this passes the stream configuration down to the HAL layer, eventually reaching V4L2Camera::setupStreams
                    offlineStreamIds = mRemoteDevice.endConfigure(operatingMode,
                            sessionParams.getNativeCopy(), createSessionStartTime);
                } else {
                    offlineStreamIds = mRemoteDevice.endConfigure(operatingMode, null,
                            createSessionStartTime);
                }
                ......
                success = true;
            } catch (IllegalArgumentException e) {
                // OK. camera service can reject stream config if it's not supported by HAL
                // This is only the result of a programmer misusing the camera2 api.
                Log.w(TAG, "Stream configuration failed due to: " + e.getMessage());
                return false;
            } catch (CameraAccessException e) {
                if (e.getReason() == CameraAccessException.CAMERA_IN_USE) {
                    throw new IllegalStateException("The camera is currently busy." +
                            " You must wait until the previous operation completes.", e);
                }
                throw e;
            } finally {
                if (success && outputs.size() > 0) {
                    mDeviceExecutor.execute(mCallOnIdle);
                } else {
                    // Always return to the 'unconfigured' state if we didn't hit a fatal error
                    mDeviceExecutor.execute(mCallOnUnconfigured);
                }
            }
        }

        return success;
    }

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient

Creating a stream from the OutputConfiguration:

binder::Status CameraDeviceClient::createStream(
        const hardware::camera2::params::OutputConfiguration &outputConfiguration,
        /*out*/
        int32_t* newStreamId) {
    ATRACE_CALL();

    binder::Status res;
    if (!(res = checkPidStatus(__FUNCTION__)).isOk()) return res;

    Mutex::Autolock icl(mBinderSerializationLock);

    // outputConfiguration holds the surfaces; each surface holds a graphicBufferProducer,
    // and that producer is what is handed to the camera HAL to be filled with camera data
    const std::vector<sp<IGraphicBufferProducer>>& bufferProducers =
            outputConfiguration.getGraphicBufferProducers();
    size_t numBufferProducers = bufferProducers.size();
    bool deferredConsumer = outputConfiguration.isDeferred();
    bool isShared = outputConfiguration.isShared();
    String8 physicalCameraId = String8(outputConfiguration.getPhysicalCameraId());
    bool deferredConsumerOnly = deferredConsumer && numBufferProducers == 0;
    bool isMultiResolution = outputConfiguration.isMultiResolution();
    ..........
    std::vector<sp<Surface>> surfaces;
    std::vector<sp<IBinder>> binders;
    ..........
    OutputStreamInfo streamInfo;
    bool isStreamInfoValid = false;
    const std::vector<int32_t> &sensorPixelModesUsed =
            outputConfiguration.getSensorPixelModesUsed();
    for (auto& bufferProducer : bufferProducers) {
        // Don't create multiple streams for the same target surface
        // one stream can map to multiple surfaces, but one surface can only map to one stream
        sp<IBinder> binder = IInterface::asBinder(bufferProducer);
        ssize_t index = mStreamMap.indexOfKey(binder);
        // entering this if means the surface already has a stream, previously stored in mStreamMap
        if (index != NAME_NOT_FOUND) {
            String8 msg = String8::format("Camera %s: Surface already has a stream created for it "
                    "(ID %zd)", mCameraIdStr.string(), index);
            ALOGW("%s: %s", __FUNCTION__, msg.string());
            return STATUS_ERROR(CameraService::ERROR_ALREADY_EXISTS, msg.string());
        }

        // from the surface the app requested earlier, take its bufferProducer, then create
        // the streamInfo and a C++ Surface, and return them
        sp<Surface> surface;
        res = SessionConfigurationUtils::createSurfaceFromGbp(streamInfo,
                isStreamInfoValid, surface, bufferProducer, mCameraIdStr,
                mDevice->infoPhysical(physicalCameraId), sensorPixelModesUsed);
        .......
        binders.push_back(IInterface::asBinder(bufferProducer));
        surfaces.push_back(surface);
    }

    // If mOverrideForPerfClass is true, do not fail createStream() for small
    // JPEG sizes because existing createSurfaceFromGbp() logic will find the
    // closest possible supported size.

    int streamId = camera3::CAMERA3_STREAM_ID_INVALID;
    std::vector<int> surfaceIds;
      ......
    } else {
        // this goes into Camera3Device; note that streamId and surfaceIds are assigned inside createStream,
        // while surfaces are handed to the Camera3OutputStream; each Camera3OutputStream created by createStream increments streamId
        err = mDevice->createStream(surfaces, deferredConsumer, streamInfo.width,
                streamInfo.height, streamInfo.format, streamInfo.dataSpace,
                static_cast<camera_stream_rotation_t>(outputConfiguration.getRotation()),
                &streamId, physicalCameraId, streamInfo.sensorPixelModesUsed, &surfaceIds,
                outputConfiguration.getSurfaceSetID(), isShared, isMultiResolution);
    }

    if (err != OK) {
       ......
    } else {
        int i = 0;
        // binders holds each surface's bufferProducer, so you can think of binders as holding the surfaces themselves
        for (auto& binder : binders) {
            ALOGV("%s: mStreamMap add binder %p streamId %d, surfaceId %d",
                    __FUNCTION__, binder.get(), streamId, i);
            // in mStreamMap, each surface's bufferProducer maps to a StreamSurfaceId object,
            // which stores a streamId and the corresponding surfaceIds[i]; the streamId tells which
            // Camera3OutputStream this is, and every surfaceIds[i] stored here is 0
            mStreamMap.add(binder, StreamSurfaceId(streamId, surfaceIds[i]));
            i++;
        }
        
        // one streamId corresponds to one outputConfiguration
        mConfiguredOutputs.add(streamId, outputConfiguration);
        // mapping between each Camera3OutputStream and its streamInfo
        mStreamInfoMap[streamId] = streamInfo;

        ALOGV("%s: Camera %s: Successfully created a new stream ID %d for output surface"
                    " (%d x %d) with format 0x%x.",
                  __FUNCTION__, mCameraIdStr.string(), streamId, streamInfo.width,
                  streamInfo.height, streamInfo.format);

        // Set transform flags to ensure preview to be rotated correctly.
        res = setStreamTransformLocked(streamId);

        // Fill in mHighResolutionCameraIdToStreamIdSet map
        const String8 &cameraIdUsed =
                physicalCameraId.size() != 0 ? physicalCameraId : mCameraIdStr;
        const char *cameraIdUsedCStr = cameraIdUsed.string();
        // Only needed for high resolution sensors
        if (mHighResolutionSensors.find(cameraIdUsedCStr) !=
                mHighResolutionSensors.end()) {
            mHighResolutionCameraIdToStreamIdSet[cameraIdUsedCStr].insert(streamId);
        }

        // this is returned to the app
        *newStreamId = streamId;
    }

    return res;
}

frameworks/av/services/camera/libcameraservice/utils/SessionConfigurationUtils.cpp

// Build the OutputStreamInfo: width, height, format, etc. are taken from the surface the app requested
binder::Status SessionConfigurationUtils::createSurfaceFromGbp(
        OutputStreamInfo& streamInfo, bool isStreamInfoValid,
        sp<Surface>& surface, const sp<IGraphicBufferProducer>& gbp,
        const String8 &logicalCameraId, const CameraMetadata &physicalCameraMetadata,
        const std::vector<int32_t> &sensorPixelModesUsed){
    .......
    // from the gbp of the surface above, create (new) a C++-level Surface object; the C++ Surface derives from ANativeWindow
    surface = new Surface(gbp, useAsync);
    ANativeWindow *anw = surface.get();

    int width, height, format;
    android_dataspace dataSpace;
    
    
    // query the surface's width, height, format, etc. from the ANativeWindow
    if ((err = anw->query(anw, NATIVE_WINDOW_WIDTH, &width)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface width: %s (%d)",
                 logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_HEIGHT, &height)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface height: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_FORMAT, &format)) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface format: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    if ((err = anw->query(anw, NATIVE_WINDOW_DEFAULT_DATASPACE,
            reinterpret_cast<int*>(&dataSpace))) != OK) {
        String8 msg = String8::format("Camera %s: Failed to query Surface dataspace: %s (%d)",
                logicalCameraId.string(), strerror(-err), err);
        ALOGE("%s: %s", __FUNCTION__, msg.string());
        return STATUS_ERROR(CameraService::ERROR_INVALID_OPERATION, msg.string());
    }
    ......
    // isStreamInfoValid is passed in as false, so the information above is copied into streamInfo
    if (!isStreamInfoValid) {
        streamInfo.width = width;
        streamInfo.height = height;
        streamInfo.format = format;
        streamInfo.dataSpace = dataSpace;
        streamInfo.consumerUsage = consumerUsage;
        streamInfo.sensorPixelModesUsed = overriddenSensorPixelModes;
        return binder::Status::ok();
    }
    ......
    }
    return binder::Status::ok();
}

frameworks/av/services/camera/libcameraservice/device3/Camera3Device.cpp

status_t Camera3Device::createStream(const std::vector<sp<Surface>>& consumers,
        bool hasDeferredConsumer, uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation, int *id,
        const String8& physicalCameraId, const std::unordered_set<int32_t> &sensorPixelModesUsed,
        std::vector<int> *surfaceIds, int streamSetId, bool isShared, bool isMultiResolution,
        uint64_t consumerUsage) {
    ......
    sp<Camera3OutputStream> newStream;
    ......
    if (format == HAL_PIXEL_FORMAT_BLOB) {
     ......
    } else {
        // create (new) a Camera3OutputStream
        newStream = new Camera3OutputStream(mNextStreamId, consumers[0],
                width, height, format, dataSpace, rotation,
                mTimestampOffset, physicalCameraId, sensorPixelModesUsed, streamSetId,
                isMultiResolution);
    }

    size_t consumerCount = consumers.size();
    for (size_t i = 0; i < consumerCount; i++) {
        // Camera3OutputStream::getSurfaceId is implemented to just return 0
        int id = newStream->getSurfaceId(consumers[i]);
        if (id < 0) {
            SET_ERR_L("Invalid surface id");
            return BAD_VALUE;
        }
        // store the id of each surface passed in earlier into surfaceIds
        if (surfaceIds != nullptr) {
            surfaceIds->push_back(id);
        }
    }

    newStream->setStatusTracker(mStatusTracker);

    newStream->setBufferManager(mBufferManager);

    newStream->setImageDumpMask(mImageDumpMask);

    res = mOutputStreams.add(mNextStreamId, newStream);
    if (res < 0) {
        SET_ERR_L("Can't add new stream to set: %s (%d)", strerror(-res), res);
        return res;
    }

    mSessionStatsBuilder.addStream(mNextStreamId);

    *id = mNextStreamId++;
    mNeedConfig = true;

    // Continue captures if active at start
    if (wasActive) {
        ALOGV("%s: Restarting activity to reconfigure streams", __FUNCTION__);
        // Reuse current operating mode and session parameters for new stream config
        res = configureStreamsLocked(mOperatingMode, mSessionParams);
        if (res != OK) {
            CLOGE("Can't reconfigure device for new stream %d: %s (%d)",
                    mNextStreamId, strerror(-res), res);
            return res;
        }
        internalResumeLocked();
    }
    ALOGV("Camera %s: Created new stream", mId.string());
    return OK;
}

frameworks/av/services/camera/libcameraservice/device3/Camera3OutputStream.cpp

Camera3OutputStream::Camera3OutputStream(int id,
        sp<Surface> consumer,
        uint32_t width, uint32_t height, int format,
        android_dataspace dataSpace, camera_stream_rotation_t rotation,
        nsecs_t timestampOffset, const String8& physicalCameraId,
        const std::unordered_set<int32_t> &sensorPixelModesUsed,
        int setId, bool isMultiResolution) :
        Camera3IOStreamBase(id, CAMERA_STREAM_OUTPUT, width, height,
                            /*maxSize*/0, format, dataSpace, rotation,
                            physicalCameraId, sensorPixelModesUsed, setId, isMultiResolution),
        mConsumer(consumer),
        mTransform(0),
        mTraceFirstBuffer(true),
        mUseBufferManager(false),
        mTimestampOffset(timestampOffset),
        mConsumerUsage(0),
        mDropBuffers(false),
        mDequeueBufferLatency(kDequeueLatencyBinSize) {

    if (mConsumer == NULL) {
        ALOGE("%s: Consumer is NULL!", __FUNCTION__);
        mState = STATE_ERROR;
    }

    bool needsReleaseNotify = setId > CAMERA3_STREAM_SET_ID_INVALID;
    mBufferProducerListener = new BufferProducerListener(this, needsReleaseNotify);
}

The sequence diagram is as follows:

IV. Starting Preview

1. CameraManager -> openCamera() yields the CameraDeviceImpl; CameraDeviceImpl's createCaptureSession is then called to create the session and obtain a CameraCaptureSessionImpl object; finally CameraCaptureSessionImpl's setRepeatingRequest is called to start preview.
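
Before going through the framework code, a minimal app-side sketch of this step, assuming the CameraDevice and CameraCaptureSession obtained in the previous sections and a placeholder previewSurface:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;
import android.hardware.camera2.CaptureRequest;
import android.os.Handler;
import android.view.Surface;

public class PreviewStarter {
    // Sketch: build a repeating preview request that targets the preview Surface
    // and hand it to the session received in onConfigured().
    public static void startPreview(CameraDevice device, CameraCaptureSession session,
            Surface previewSurface, Handler handler) throws CameraAccessException {
        CaptureRequest.Builder builder =
                device.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
        builder.addTarget(previewSurface);
        // Internally this lands in CameraDeviceImpl.setRepeatingRequest(), analyzed below.
        session.setRepeatingRequest(builder.build(), /*callback*/ null, handler);
    }
}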

frameworks/base/core/java/android/hardware/camera2/impl/CameraCaptureSessionImpl.java

  public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Handler handler) throws CameraAccessException {
        checkRepeatingRequest(request);

        synchronized (mDeviceImpl.mInterfaceLock) {
            checkNotClosed();

            handler = checkHandler(handler, callback);

            if (DEBUG) {
                Log.v(TAG, mIdString + "setRepeatingRequest - request " + request + ", callback " +
                        callback + " handler" + " " + handler);
            }

            return addPendingSequence(mDeviceImpl.setRepeatingRequest(request,
                    createCaptureCallbackProxy(handler, callback), mDeviceExecutor));
        }
    }

frameworks/base/core/java/android/hardware/camera2/impl/CameraDeviceImpl.java

    public int setRepeatingRequest(CaptureRequest request, CaptureCallback callback,
            Executor executor) throws CameraAccessException {
        List<CaptureRequest> requestList = new ArrayList<CaptureRequest>();
        requestList.add(request);
        return submitCaptureRequest(requestList, callback, executor, /*streaming*/true);
    }
 private int submitCaptureRequest(List<CaptureRequest> requestList, CaptureCallback callback,
            Executor executor, boolean repeating) throws CameraAccessException {

            ......
            // if a repeating request is already running, stop it first, then start again
            if (repeating) {
                stopRepeating();
            }

            SubmitInfo requestInfo;

            CaptureRequest[] requestArray = requestList.toArray(new CaptureRequest[requestList.size()]);
            // Convert Surface to streamIdx and surfaceIdx
            for (CaptureRequest request : requestArray) {
                request.convertSurfaceToStreamId(mConfiguredOutputs);
            }
            
            // calls into CameraDeviceClient's submitRequestList
            requestInfo = mRemoteDevice.submitRequestList(requestArray, repeating);
            if (DEBUG) {
                Log.v(TAG, "last frame number " + requestInfo.getLastFrameNumber());
            }

             ......

            if (repeating) {
                if (mRepeatingRequestId != REQUEST_ID_NONE) {
                    checkEarlyTriggerSequenceCompleteLocked(mRepeatingRequestId,
                            requestInfo.getLastFrameNumber(),
                            mRepeatingRequestTypes);
                }
                mRepeatingRequestId = requestInfo.getRequestId();
                mRepeatingRequestTypes = getRequestTypes(requestArray);
            } else {
                mRequestLastFrameNumbersList.add(
                    new RequestLastFrameNumbersHolder(requestList, requestInfo));
            }

            if (mIdle) {
                mDeviceExecutor.execute(mCallOnActive);
            }
            mIdle = false;

            return requestInfo.getRequestId();
        }
    }

frameworks/av/services/camera/libcameraservice/api2/CameraDeviceClient.cpp

binder::Status CameraDeviceClient::submitRequestList(
        const std::vector<hardware::camera2::CaptureRequest>& requests,
        bool streaming,
        /*out*/
        hardware::camera2::utils::SubmitInfo *submitInfo) {
      
    ......
    List<const CameraDeviceBase::PhysicalCameraSettingsList> metadataRequestList;
    std::list<const SurfaceMap> surfaceMapList;
    submitInfo->mRequestId = mRequestIdCounter;
    uint32_t loopCounter = 0;

    // 
    for (auto&& request: requests) {
        if (request.mIsReprocess) {
            if (!mInputStream.configured) {
                ALOGE("%s: Camera %s: no input stream is configured.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR_FMT(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "No input configured for camera %s but request is for reprocessing",
                        mCameraIdStr.string());
            } else if (streaming) {
                ALOGE("%s: Camera %s: streaming reprocess requests not supported.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Repeating reprocess requests not supported");
            } else if (request.mPhysicalCameraSettings.size() > 1) {
                ALOGE("%s: Camera %s: reprocess requests not supported for "
                        "multiple physical cameras.", __FUNCTION__,
                        mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Reprocess requests not supported for multiple cameras");
            }
        }

        if (request.mPhysicalCameraSettings.empty()) {
            ALOGE("%s: Camera %s: request doesn't contain any settings.", __FUNCTION__,
                    mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Request doesn't contain any settings");
        }

        //The first capture settings should always match the logical camera id
        String8 logicalId(request.mPhysicalCameraSettings.begin()->id.c_str());
        if (mDevice->getId() != logicalId) {
            ALOGE("%s: Camera %s: Invalid camera request settings.", __FUNCTION__,
                    mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Invalid camera request settings");
        }

        if (request.mSurfaceList.isEmpty() && request.mStreamIdxList.size() == 0) {
            ALOGE("%s: Camera %s: Requests must have at least one surface target. "
                    "Rejecting request.", __FUNCTION__, mCameraIdStr.string());
            return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                    "Request has no output targets");
        }

        /**
         * Write in the output stream IDs and map from stream ID to surface ID
         * which we calculate from the capture request's list of surface target
         */
        SurfaceMap surfaceMap;
        Vector<int32_t> outputStreamIds;
        std::vector<std::string> requestedPhysicalIds;
        if (request.mSurfaceList.size() > 0) {
            for (const sp<Surface>& surface : request.mSurfaceList) {
                if (surface == 0) continue;

                int32_t streamId;
                sp<IGraphicBufferProducer> gbp = surface->getIGraphicBufferProducer();
                res = insertGbpLocked(gbp, &surfaceMap, &outputStreamIds, &streamId);
                if (!res.isOk()) {
                    return res;
                }

                ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
                if (index >= 0) {
                    String8 requestedPhysicalId(
                            mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
                    requestedPhysicalIds.push_back(requestedPhysicalId.string());
                } else {
                    ALOGW("%s: Output stream Id not found among configured outputs!", __FUNCTION__);
                }
            }
        } else {
            for (size_t i = 0; i < request.mStreamIdxList.size(); i++) {
                int streamId = request.mStreamIdxList.itemAt(i);
                int surfaceIdx = request.mSurfaceIdxList.itemAt(i);

                ssize_t index = mConfiguredOutputs.indexOfKey(streamId);
                if (index < 0) {
                    ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
                            " we have not called createStream on: stream %d",
                            __FUNCTION__, mCameraIdStr.string(), streamId);
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Request targets Surface that is not part of current capture session");
                }

                const auto& gbps = mConfiguredOutputs.valueAt(index).getGraphicBufferProducers();
                if ((size_t)surfaceIdx >= gbps.size()) {
                    ALOGE("%s: Camera %s: Tried to submit a request with a surface that"
                            " we have not called createStream on: stream %d, surfaceIdx %d",
                            __FUNCTION__, mCameraIdStr.string(), streamId, surfaceIdx);
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Request targets Surface has invalid surface index");
                }

                res = insertGbpLocked(gbps[surfaceIdx], &surfaceMap, &outputStreamIds, nullptr);
                if (!res.isOk()) {
                    return res;
                }

                String8 requestedPhysicalId(
                        mConfiguredOutputs.valueAt(index).getPhysicalCameraId());
                requestedPhysicalIds.push_back(requestedPhysicalId.string());
            }
        }

        CameraDeviceBase::PhysicalCameraSettingsList physicalSettingsList;
        for (const auto& it : request.mPhysicalCameraSettings) {
            if (it.settings.isEmpty()) {
                ALOGE("%s: Camera %s: Sent empty metadata packet. Rejecting request.",
                        __FUNCTION__, mCameraIdStr.string());
                return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Request settings are empty");
            }

            // Check whether the physical / logical stream has settings
            // consistent with the sensor pixel mode(s) it was configured with.
            // mCameraIdToStreamSet will only have ids that are high resolution
            const auto streamIdSetIt = mHighResolutionCameraIdToStreamIdSet.find(it.id);
            if (streamIdSetIt != mHighResolutionCameraIdToStreamIdSet.end()) {
                std::list<int> streamIdsUsedInRequest = getIntersection(streamIdSetIt->second,
                        outputStreamIds);
                if (!request.mIsReprocess &&
                        !isSensorPixelModeConsistent(streamIdsUsedInRequest, it.settings)) {
                     ALOGE("%s: Camera %s: Request settings CONTROL_SENSOR_PIXEL_MODE not "
                            "consistent with configured streams. Rejecting request.",
                            __FUNCTION__, it.id.c_str());
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                        "Request settings CONTROL_SENSOR_PIXEL_MODE are not consistent with "
                        "streams configured");
                }
            }

            String8 physicalId(it.id.c_str());
            bool hasTestPatternModePhysicalKey = std::find(mSupportedPhysicalRequestKeys.begin(),
                    mSupportedPhysicalRequestKeys.end(), ANDROID_SENSOR_TEST_PATTERN_MODE) !=
                    mSupportedPhysicalRequestKeys.end();
            bool hasTestPatternDataPhysicalKey = std::find(mSupportedPhysicalRequestKeys.begin(),
                    mSupportedPhysicalRequestKeys.end(), ANDROID_SENSOR_TEST_PATTERN_DATA) !=
                    mSupportedPhysicalRequestKeys.end();
            if (physicalId != mDevice->getId()) {
                auto found = std::find(requestedPhysicalIds.begin(), requestedPhysicalIds.end(),
                        it.id);
                if (found == requestedPhysicalIds.end()) {
                    ALOGE("%s: Camera %s: Physical camera id: %s not part of attached outputs.",
                            __FUNCTION__, mCameraIdStr.string(), physicalId.string());
                    return STATUS_ERROR(CameraService::ERROR_ILLEGAL_ARGUMENT,
                            "Invalid physical camera id");
                }

                if (!mSupportedPhysicalRequestKeys.empty()) {
                    // Filter out any unsupported physical request keys.
                    CameraMetadata filteredParams(mSupportedPhysicalRequestKeys.size());
                    camera_metadata_t *meta = const_cast<camera_metadata_t *>(
                            filteredParams.getAndLock());
                    set_camera_metadata_vendor_id(meta, mDevice->getVendorTagId());
                    filteredParams.unlock(meta);

                    for (const auto& keyIt : mSupportedPhysicalRequestKeys) {
                        camera_metadata_ro_entry entry = it.settings.find(keyIt);
                        if (entry.count > 0) {
                            filteredParams.update(entry);
                        }
                    }

                    physicalSettingsList.push_back({it.id, filteredParams,
                            hasTestPatternModePhysicalKey, hasTestPatternDataPhysicalKey});
                }
            } else {
                physicalSettingsList.push_back({it.id, it.settings});
            }
        }

        if (!enforceRequestPermissions(physicalSettingsList.begin()->metadata)) {
            // Callee logs
            return STATUS_ERROR(CameraService::ERROR_PERMISSION_DENIED,
                    "Caller does not have permission to change restricted controls");
        }

        physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_OUTPUT_STREAMS,
                &outputStreamIds[0], outputStreamIds.size());

        if (request.mIsReprocess) {
            physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_INPUT_STREAMS,
                    &mInputStream.id, 1);
        }

        physicalSettingsList.begin()->metadata.update(ANDROID_REQUEST_ID,
                &(submitInfo->mRequestId), /*size*/1);
        loopCounter++; // loopCounter starts from 1
        ALOGV("%s: Camera %s: Creating request with ID %d (%d of %zu)",
                __FUNCTION__, mCameraIdStr.string(), submitInfo->mRequestId,
                loopCounter, requests.size());

        metadataRequestList.push_back(physicalSettingsList);
        surfaceMapList.push_back(surfaceMap);
    }
    mRequestIdCounter++;

    if (streaming) {
        // preview (repeating) takes this path
        err = mDevice->setStreamingRequestList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
    } else {
        // still capture takes this path
        err = mDevice->captureList(metadataRequestList, surfaceMapList,
                &(submitInfo->mLastFrameNumber));
    }

    ALOGV("%s: Camera %s: End of function", __FUNCTION__, mCameraIdStr.string());
    return res;
}
status_t Camera3Device::setStreamingRequestList(
        const List<const PhysicalCameraSettingsList> &requestsList,
        const std::list<const SurfaceMap> &surfaceMaps, int64_t *lastFrameNumber) {
    ATRACE_CALL();
    
    return submitRequestsHelper(requestsList, surfaceMaps, /*repeating*/true, lastFrameNumber);
}


status_t Camera3Device::submitRequestsHelper(
        const List<const PhysicalCameraSettingsList> &requests,
        const std::list<const SurfaceMap> &surfaceMaps,
        bool repeating,
        /*out*/
        int64_t *lastFrameNumber) {
    // preview takes this path
    if (repeating) {
        res = mRequestThread->setRepeatingRequests(requestList, lastFrameNumber);
    } else {
        res = mRequestThread->queueRequestList(requestList, lastFrameNumber);
    }

    ......
    return res;
}


status_t Camera3Device::RequestThread::setRepeatingRequests(
        const RequestList &requests,
        /*out*/
        int64_t *lastFrameNumber) {
    ATRACE_CALL();
    Mutex::Autolock l(mRequestLock);
    if (lastFrameNumber != NULL) {
        *lastFrameNumber = mRepeatingLastFrameNumber;
    }
    mRepeatingRequests.clear();
    mFirstRepeating = true;
    // put all the requests into mRepeatingRequests
    mRepeatingRequests.insert(mRepeatingRequests.begin(),
            requests.begin(), requests.end());

    unpauseForNewRequests();

    mRepeatingLastFrameNumber = hardware::camera2::ICameraDeviceUser::NO_IN_FLIGHT_REPEATING_FRAMES;
    return OK;
}

void Camera3Device::RequestThread::unpauseForNewRequests() {
    ATRACE_CALL();
    // With work to do, mark thread as unpaused.
    // If paused by request (setPaused), don't resume, to avoid
    // extra signaling/waiting overhead to waitUntilPaused
    // the key part is this signal(), which wakes up the thread that sends requests to the HAL
    mRequestSignal.signal();
    Mutex::Autolock p(mPauseLock);
    if (!mDoPause) {
        ALOGV("%s: RequestThread: Going active", __FUNCTION__);
        if (mPaused) {
            sp<StatusTracker> statusTracker = mStatusTracker.promote();
            if (statusTracker != 0) {
                statusTracker->markComponentActive(mStatusId);
            }
        }
        mPaused = false;
    }
}

mRequestSignal.signal() wakes up the thread that was started earlier and is blocked waiting.

Sending the request down

threadLoop starts working.

The waiting thread is woken up and starts acquiring buffers; the request is sent down to the camera HAL through processBatchCaptureRequests.

Through onResultAvailable, the app is notified that the camera data is ready.
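
On the app side this notification surfaces through CameraCaptureSession.CaptureCallback. A minimal sketch of such a callback, which would be passed to setRepeatingRequest in place of the null callback in the earlier sketch:

import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CaptureRequest;
import android.hardware.camera2.CaptureResult;
import android.hardware.camera2.TotalCaptureResult;
import android.util.Log;

public class PreviewCaptureCallback extends CameraCaptureSession.CaptureCallback {
    private static final String TAG = "PreviewCaptureCallback";

    @Override
    public void onCaptureProgressed(CameraCaptureSession session,
            CaptureRequest request, CaptureResult partialResult) {
        // Partial metadata for an in-flight frame.
    }

    @Override
    public void onCaptureCompleted(CameraCaptureSession session,
            CaptureRequest request, TotalCaptureResult result) {
        // Full metadata for the frame; the image data itself arrives through the
        // configured output Surfaces (preview Surface, ImageReader, ...).
        Log.d(TAG, "frame " + result.getFrameNumber() + " completed");
    }
}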

V. Stopping Preview

To stop preview, the app calls CameraDeviceImpl's stopRepeating interface. From the trace, the call chain is:

CameraDeviceImpl::stopRepeating -> CameraDeviceClient::cancelRequest -> Camera3Device::clearStreamingRequest -> Camera3Device::RequestThread::clearRepeatingRequests -> Camera3Device::RequestThread::clearRepeatingRequestsLocked

clearRepeatingRequestsLocked clears the mRepeatingRequests list. Once mRepeatingRequests is empty, Camera3Device's threadLoop takes the corresponding branch and stops acquiring buffers and sending them to the camera HAL.

VI. Closing the Camera

The app calls the close interface to close the camera; the call sequence is as above. In cameraserver the related threadLoop threads exit, and in the camera HAL the actual camera ioctl close is performed and the camera context corresponding to the AIS client is destroyed.
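
A minimal app-side sketch of the teardown order described in the last two sections, assuming the session and device objects from the earlier sketches:

import android.hardware.camera2.CameraAccessException;
import android.hardware.camera2.CameraCaptureSession;
import android.hardware.camera2.CameraDevice;

public class CameraCloser {
    // Sketch: stop the repeating preview request, close the session, then close
    // the device; CameraDevice.close() drives the cameraserver/HAL teardown above.
    public static void shutdown(CameraCaptureSession session, CameraDevice device) {
        try {
            session.stopRepeating();
        } catch (CameraAccessException | IllegalStateException e) {
            // The device may already be closing; ignore for this sketch.
        }
        session.close();
        device.close();
    }
}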
