This analysis is based on the Android P source code.
After CameraManager.openCamera() and the corresponding HAL-layer add_channel have completed, the next step is CameraDevice.createCaptureSession and its corresponding HAL-layer flow.
When CameraDevice.createCaptureSession reaches the framework, Camera3Device::configureStreamsLocked() is executed:
cpp
status_t Camera3Device::configureStreamsLocked(int operatingMode,
const CameraMetadata& sessionParams, bool notifyRequestThread) {
........................
camera3_stream_configuration config;
config.operation_mode = mOperatingMode;
config.num_streams = (mInputStream != NULL) + mOutputStreams.size();
Vector<camera3_stream_t*> streams;
streams.setCapacity(config.num_streams);
std::vector<uint32_t> bufferSizes(config.num_streams, 0);
if (mInputStream != NULL) {
camera3_stream_t *inputStream;
inputStream = mInputStream->startConfiguration();
if (inputStream == NULL) {
CLOGE("Can't start input stream configuration");
cancelStreamsConfigurationLocked();
return INVALID_OPERATION;
}
streams.add(inputStream);
}
for (size_t i = 0; i < mOutputStreams.size(); i++) {
// Don't configure bidi streams twice, nor add them twice to the list
if (mOutputStreams[i].get() ==
static_cast<Camera3StreamInterface*>(mInputStream.get())) {
config.num_streams--;
continue;
}
camera3_stream_t *outputStream;
outputStream = mOutputStreams.editValueAt(i)->startConfiguration();
if (outputStream == NULL) {
CLOGE("Can't start output stream configuration");
cancelStreamsConfigurationLocked();
return INVALID_OPERATION;
}
// Save every output stream; these streams are passed down during the subsequent HAL
// configuration so that the HAL layer can read the buffers in each stream and fill them with data
streams.add(outputStream);
if (outputStream->format == HAL_PIXEL_FORMAT_BLOB &&
outputStream->data_space == HAL_DATASPACE_V0_JFIF) {
size_t k = i + ((mInputStream != nullptr) ? 1 : 0); // Input stream if present should
// always occupy the initial entry.
bufferSizes[k] = static_cast<uint32_t>(
getJpegBufferSize(outputStream->width, outputStream->height));
}
}
// Save the stream information from mOutputStreams into the camera3_stream_configuration variable config
config.streams = streams.editArray();
// Do the HAL configuration; will potentially touch stream
// max_buffers, usage, priv fields.
const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
// Hand the stream information down to the HAL through the interface instance to run the HAL-side stream configuration
res = mInterface->configureStreams(sessionBuffer, config, bufferSizes);
sessionParams.unlock(sessionBuffer);
........................
return OK;
}
This function calls mInterface->configureStreams; mInterface is assigned in Camera3Device::initialize:
cpp
status_t Camera3Device::initialize(sp<CameraProviderManager> manager, const String8& monitorTags) {
ATRACE_CALL();
Mutex::Autolock il(mInterfaceLock);
Mutex::Autolock l(mLock);
ALOGV("%s: Initializing HIDL device for camera %s", __FUNCTION__, mId.string());
if (mStatus != STATUS_UNINITIALIZED) {
CLOGE("Already initialized!");
return INVALID_OPERATION;
}
if (manager == nullptr) return INVALID_OPERATION;
sp<ICameraDeviceSession> session;
ATRACE_BEGIN("CameraHal::openSession");
status_t res = manager->openSession(mId.string(), this,
/*out*/ &session);
ATRACE_END();
if (res != OK) {
SET_ERR_L("Could not open camera session: %s (%d)", strerror(-res), res);
return res;
}
res = manager->getCameraCharacteristics(mId.string(), &mDeviceInfo);
if (res != OK) {
SET_ERR_L("Could not retrive camera characteristics: %s (%d)", strerror(-res), res);
session->close();
return res;
}
std::shared_ptr<RequestMetadataQueue> queue;
auto requestQueueRet = session->getCaptureRequestMetadataQueue(
[&queue](const auto& descriptor) {
queue = std::make_shared<RequestMetadataQueue>(descriptor);
if (!queue->isValid() || queue->availableToWrite() <= 0) {
ALOGE("HAL returns empty request metadata fmq, not use it");
queue = nullptr;
// don't use the queue onwards.
}
});
if (!requestQueueRet.isOk()) {
ALOGE("Transaction error when getting request metadata fmq: %s, not use it",
requestQueueRet.description().c_str());
return DEAD_OBJECT;
}
std::unique_ptr<ResultMetadataQueue>& resQueue = mResultMetadataQueue;
auto resultQueueRet = session->getCaptureResultMetadataQueue(
[&resQueue](const auto& descriptor) {
resQueue = std::make_unique<ResultMetadataQueue>(descriptor);
if (!resQueue->isValid() || resQueue->availableToWrite() <= 0) {
ALOGE("HAL returns empty result metadata fmq, not use it");
resQueue = nullptr;
// Don't use the resQueue onwards.
}
});
if (!resultQueueRet.isOk()) {
ALOGE("Transaction error when getting result metadata queue from camera session: %s",
resultQueueRet.description().c_str());
return DEAD_OBJECT;
}
IF_ALOGV() {
session->interfaceChain([](
::android::hardware::hidl_vec<::android::hardware::hidl_string> interfaceChain) {
ALOGV("Session interface chain:");
for (auto iface : interfaceChain) {
ALOGV(" %s", iface.c_str());
}
});
}
mInterface = new HalInterface(session, queue);
........................
}
In Camera3Device::initialize, the HalInterface object is built from the CameraDeviceSession created during the openCamera flow together with the corresponding RequestMetadataQueue.
Now back to the mInterface->configureStreams call:
cpp
const camera_metadata_t *sessionBuffer = sessionParams.getAndLock();
res = mInterface->configureStreams(sessionBuffer, config, bufferSizes);
sessionParams.unlock(sessionBuffer);
Parameter descriptions:

Parameter | Description
---|---
sessionBuffer | A `const camera_metadata_t *` obtained via sessionParams.getAndLock(); it holds the session parameters (empty when createCaptureSession was called without any)
config | Of type camera3_stream_configuration; it holds the information for all streams, including their configuration, and is ultimately passed down to the HAL
bufferSizes | A `std::vector<uint32_t>` holding each stream's buffer size; in the code above only BLOB/JFIF (JPEG) streams get a nonzero entry
From Camera3Device's initialize function we know that mInterface is of type HalInterface, so let's look at HalInterface's configureStreams function:
cpp
status_t Camera3Device::HalInterface::configureStreams(const camera_metadata_t *sessionParams,
camera3_stream_configuration *config, const std::vector<uint32_t>& bufferSizes) {
ATRACE_NAME("CameraHal::configureStreams");
if (!valid()) return INVALID_OPERATION;
status_t res = OK;
// Convert stream config to HIDL
std::set<int> activeStreams;
device::V3_2::StreamConfiguration requestedConfiguration3_2;
device::V3_4::StreamConfiguration requestedConfiguration3_4;
requestedConfiguration3_2.streams.resize(config->num_streams);
requestedConfiguration3_4.streams.resize(config->num_streams);
for (size_t i = 0; i < config->num_streams; i++) {
device::V3_2::Stream &dst3_2 = requestedConfiguration3_2.streams[i];
device::V3_4::Stream &dst3_4 = requestedConfiguration3_4.streams[i];
camera3_stream_t *src = config->streams[i];
Camera3Stream* cam3stream = Camera3Stream::cast(src);
cam3stream->setBufferFreedListener(this);
int streamId = cam3stream->getId();
StreamType streamType;
switch (src->stream_type) {
case CAMERA3_STREAM_OUTPUT:
streamType = StreamType::OUTPUT;
break;
case CAMERA3_STREAM_INPUT:
streamType = StreamType::INPUT;
break;
default:
ALOGE("%s: Stream %d: Unsupported stream type %d",
__FUNCTION__, streamId, config->streams[i]->stream_type);
return BAD_VALUE;
}
dst3_2.id = streamId;
dst3_2.streamType = streamType;
dst3_2.width = src->width;
dst3_2.height = src->height;
dst3_2.format = mapToPixelFormat(src->format);
dst3_2.usage = mapToConsumerUsage(cam3stream->getUsage());
dst3_2.dataSpace = mapToHidlDataspace(src->data_space);
dst3_2.rotation = mapToStreamRotation((camera3_stream_rotation_t) src->rotation);
dst3_4.v3_2 = dst3_2;
dst3_4.bufferSize = bufferSizes[i];
if (src->physical_camera_id != nullptr) {
dst3_4.physicalCameraId = src->physical_camera_id;
}
activeStreams.insert(streamId);
// Create Buffer ID map if necessary
if (mBufferIdMaps.count(streamId) == 0) {
mBufferIdMaps.emplace(streamId, BufferIdMap{});
}
}
// remove BufferIdMap for deleted streams
for(auto it = mBufferIdMaps.begin(); it != mBufferIdMaps.end();) {
int streamId = it->first;
bool active = activeStreams.count(streamId) > 0;
if (!active) {
it = mBufferIdMaps.erase(it);
} else {
++it;
}
}
StreamConfigurationMode operationMode;
res = mapToStreamConfigurationMode(
(camera3_stream_configuration_mode_t) config->operation_mode,
/*out*/ &operationMode);
if (res != OK) {
return res;
}
requestedConfiguration3_2.operationMode = operationMode;
requestedConfiguration3_4.operationMode = operationMode;
requestedConfiguration3_4.sessionParams.setToExternal(
reinterpret_cast<uint8_t*>(const_cast<camera_metadata_t*>(sessionParams)),
get_camera_metadata_size(sessionParams));
// Invoke configureStreams
device::V3_3::HalStreamConfiguration finalConfiguration;
common::V1_0::Status status;
// See if we have v3.4 or v3.3 HAL
if (mHidlSession_3_4 != nullptr) {
........................
} else if (mHidlSession_3_3 != nullptr) {
........................
} else {
// We don't; use v3.2 call and construct a v3.3 HalStreamConfiguration
ALOGV("%s: v3.2 device found", __FUNCTION__);
HalStreamConfiguration finalConfiguration_3_2;
auto err = mHidlSession->configureStreams(requestedConfiguration3_2,
[&status, &finalConfiguration_3_2]
(common::V1_0::Status s, const HalStreamConfiguration& halConfiguration) {
finalConfiguration_3_2 = halConfiguration;
status = s;
});
if (!err.isOk()) {
ALOGE("%s: Transaction error: %s", __FUNCTION__, err.description().c_str());
return DEAD_OBJECT;
}
finalConfiguration.streams.resize(finalConfiguration_3_2.streams.size());
for (size_t i = 0; i < finalConfiguration_3_2.streams.size(); i++) {
finalConfiguration.streams[i].v3_2 = finalConfiguration_3_2.streams[i];
finalConfiguration.streams[i].overrideDataSpace =
requestedConfiguration3_2.streams[i].dataSpace;
}
}
if (status != common::V1_0::Status::OK ) {
return CameraProviderManager::mapToStatusT(status);
}
// And convert output stream configuration from HIDL
for (size_t i = 0; i < config->num_streams; i++) {
camera3_stream_t *dst = config->streams[i];
int streamId = Camera3Stream::cast(dst)->getId();
// Start scan at i, with the assumption that the stream order matches
size_t realIdx = i;
bool found = false;
for (size_t idx = 0; idx < finalConfiguration.streams.size(); idx++) {
if (finalConfiguration.streams[realIdx].v3_2.id == streamId) {
found = true;
break;
}
realIdx = (realIdx >= finalConfiguration.streams.size() - 1) ? 0 : realIdx + 1;
}
if (!found) {
ALOGE("%s: Stream %d not found in stream configuration response from HAL",
__FUNCTION__, streamId);
return INVALID_OPERATION;
}
device::V3_3::HalStream &src = finalConfiguration.streams[realIdx];
Camera3Stream* dstStream = Camera3Stream::cast(dst);
dstStream->setFormatOverride(false);
dstStream->setDataSpaceOverride(false);
int overrideFormat = mapToFrameworkFormat(src.v3_2.overrideFormat);
android_dataspace overrideDataSpace = mapToFrameworkDataspace(src.overrideDataSpace);
if (dst->format != HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED) {
if (dst->format != overrideFormat) {
ALOGE("%s: Stream %d: Format override not allowed for format 0x%x", __FUNCTION__,
streamId, dst->format);
}
if (dst->data_space != overrideDataSpace) {
ALOGE("%s: Stream %d: DataSpace override not allowed for format 0x%x", __FUNCTION__,
streamId, dst->format);
}
} else {
dstStream->setFormatOverride((dst->format != overrideFormat) ? true : false);
dstStream->setDataSpaceOverride((dst->data_space != overrideDataSpace) ? true : false);
// Override allowed with IMPLEMENTATION_DEFINED
dst->format = overrideFormat;
dst->data_space = overrideDataSpace;
}
if (dst->stream_type == CAMERA3_STREAM_INPUT) {
if (src.v3_2.producerUsage != 0) {
ALOGE("%s: Stream %d: INPUT streams must have 0 for producer usage",
__FUNCTION__, streamId);
return INVALID_OPERATION;
}
dstStream->setUsage(
mapConsumerToFrameworkUsage(src.v3_2.consumerUsage));
} else {
// OUTPUT
if (src.v3_2.consumerUsage != 0) {
ALOGE("%s: Stream %d: OUTPUT streams must have 0 for consumer usage",
__FUNCTION__, streamId);
return INVALID_OPERATION;
}
dstStream->setUsage(
mapProducerToFrameworkUsage(src.v3_2.producerUsage));
}
dst->max_buffers = src.v3_2.maxBuffers;
}
return res;
}
This function converts the config object into StreamConfiguration objects, creating the variant that matches the HAL minor version actually in use.
Depending on the version of mHidlSession, a different HAL3 minor-version path is taken; here we use HAL3.2 as the example.
The function then calls mHidlSession->configureStreams. mHidlSession is assigned in the HalInterface constructor:
cpp
Camera3Device::HalInterface::HalInterface(
sp<ICameraDeviceSession> &session,
std::shared_ptr<RequestMetadataQueue> queue) :
mHidlSession(session),
mRequestMetadataQueue(queue) {
// Check with hardware service manager if we can downcast these interfaces
// Somewhat expensive, so cache the results at startup
auto castResult_3_4 = device::V3_4::ICameraDeviceSession::castFrom(mHidlSession);
if (castResult_3_4.isOk()) {
mHidlSession_3_4 = castResult_3_4;
}
auto castResult_3_3 = device::V3_3::ICameraDeviceSession::castFrom(mHidlSession);
if (castResult_3_3.isOk()) {
mHidlSession_3_3 = castResult_3_3;
}
}
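The downcast caching done in this constructor can be mimicked with plain dynamic_cast; castFrom in HIDL-generated code plays the same role. The ISessionV3_x and HalIface names below are hypothetical stand-ins, not the real interfaces:

```cpp
#include <memory>

// Hypothetical stand-ins for the HIDL session interface versions.
struct ISessionV3_2 { virtual ~ISessionV3_2() = default; };
struct ISessionV3_3 : ISessionV3_2 {};
struct ISessionV3_4 : ISessionV3_3 {};

// Caches the most derived version once at construction, the way HalInterface
// caches mHidlSession_3_3 / mHidlSession_3_4.
struct HalIface {
    std::shared_ptr<ISessionV3_2> base;
    std::shared_ptr<ISessionV3_3> v3_3;
    std::shared_ptr<ISessionV3_4> v3_4;

    explicit HalIface(std::shared_ptr<ISessionV3_2> s) : base(std::move(s)) {
        v3_4 = std::dynamic_pointer_cast<ISessionV3_4>(base);
        v3_3 = std::dynamic_pointer_cast<ISessionV3_3>(base);
    }

    // Dispatch mirrors configureStreams(): prefer the newest version.
    int pickVersion() const {
        if (v3_4) return 34;
        if (v3_3) return 33;
        return 32;  // fall back to the v3.2 call
    }
};
```

Checking once at construction avoids repeating a relatively expensive capability query on every call, which is exactly the rationale stated in the constructor's comment.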
In Camera3Device's initialize function, session and queue are passed in:
cpp
mInterface = new HalInterface(session, queue);
session is the CameraDeviceSession and queue is the RequestMetadataQueue;
so mHidlSession is a proxy for that session,
and mHidlSession->configureStreams therefore calls the function in CameraDeviceSession:
cpp
Return<void> CameraDeviceSession::configureStreams(
const StreamConfiguration& requestedConfiguration,
ICameraDeviceSession::configureStreams_cb _hidl_cb) {
// Same initStatus logic as in the HAL layer during the openCamera flow
Status status = initStatus();
HalStreamConfiguration outStreams;
........................
camera3_stream_configuration_t stream_list{};
hidl_vec<camera3_stream_t*> streams;
if (!preProcessConfigurationLocked(requestedConfiguration, &stream_list, &streams)) {
_hidl_cb(Status::INTERNAL_ERROR, outStreams);
return Void();
}
ATRACE_BEGIN("camera3->configure_streams");
status_t ret = mDevice->ops->configure_streams(mDevice, &stream_list);
ATRACE_END();
........................
_hidl_cb(status, outStreams);
return Void();
}
The core logic of this function is the call to mDevice->ops->configure_streams.
As with the camera open flow in the HAL, let's look at the type of mDevice and the corresponding function pointers:
c
typedef struct camera3_device_ops {
int (*initialize)(const struct camera3_device *,
const camera3_callback_ops_t *callback_ops);
int (*configure_streams)(const struct camera3_device *,
camera3_stream_configuration_t *stream_list);
int (*register_stream_buffers)(const struct camera3_device *,
const camera3_stream_buffer_set_t *buffer_set);
const camera_metadata_t* (*construct_default_request_settings)(
const struct camera3_device *,
int type);
int (*process_capture_request)(const struct camera3_device *,
camera3_capture_request_t *request);
void (*get_metadata_vendor_tag_ops)(const struct camera3_device*,
vendor_tag_query_ops_t* ops);
void (*dump)(const struct camera3_device *, int fd);
int (*flush)(const struct camera3_device *);
/* reserved for future use */
void *reserved[8];
} camera3_device_ops_t;
typedef struct camera3_device {
hw_device_t common;
camera3_device_ops_t *ops;
void *priv;
} camera3_device_t;
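The priv-based dispatch that connects this C ops table to a C++ implementation can be shown with a minimal self-contained mock; mock_device, MockHardwareInterface, and the other names here are illustrative, not AOSP's:

```cpp
// A minimal mock of the camera3_device_ops dispatch pattern: the C ops table
// holds free functions, and priv carries the C++ object that actually
// implements them.
struct mock_device;

struct mock_device_ops {
    int (*configure_streams)(mock_device* dev, int num_streams);
};

struct mock_device {
    mock_device_ops* ops;
    void* priv;  // points back at the C++ implementation object
};

class MockHardwareInterface {
public:
    int configureStreams(int numStreams) {
        mConfiguredStreams = numStreams;
        return 0;
    }
    int configuredStreams() const { return mConfiguredStreams; }

    // Trampoline: recover the C++ object from priv, the same way
    // QCamera3HardwareInterface::configure_streams does below.
    static int configure_streams(mock_device* dev, int num_streams) {
        auto* hw = static_cast<MockHardwareInterface*>(dev->priv);
        if (!hw) return -1;  // mirrors the NULL-device check
        return hw->configureStreams(num_streams);
    }

private:
    int mConfiguredStreams = 0;
};
```

A caller only ever sees the C struct, so the HAL can swap implementations without the framework knowing; priv is the bridge back into C++.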
configure_streams is a function pointer whose final implementation lives in QCamera3HardwareInterface:
cpp
int QCamera3HardwareInterface::configure_streams(
const struct camera3_device *device,
camera3_stream_configuration_t *stream_list)
{
LOGD("E");
QCamera3HardwareInterface *hw =
reinterpret_cast<QCamera3HardwareInterface *>(device->priv);
if (!hw) {
LOGE("NULL camera device");
return -ENODEV;
}
int rc = hw->configureStreams(stream_list);
LOGD("X");
return rc;
}
int QCamera3HardwareInterface::configureStreams(
camera3_stream_configuration_t *streamList)
{
ATRACE_CAMSCOPE_CALL(CAMSCOPE_HAL3_CFG_STRMS);
int rc = 0;
// Acquire perfLock before configure streams
mPerfLockMgr.acquirePerfLock(PERF_LOCK_START_PREVIEW);
rc = configureStreamsPerfLocked(streamList);
mPerfLockMgr.releasePerfLock(PERF_LOCK_START_PREVIEW);
return rc;
}
This function is called after initialize() has completed and before process_capture_request() is invoked. It is mainly used to reset the currently running pipeline and set up the new input and output streams, replacing the previously configured streams with those in stream_list.
Before it is called, all previously submitted requests must have completed and no new request may be in flight, otherwise unpredictable errors can occur. Once the HAL has handled this call, it must internally configure itself to satisfy the frame rate of the new stream configuration so that the pipeline keeps running smoothly.
QCamera3HardwareInterface::configureStreams calls configureStreamsPerfLocked(streamList). This function mainly constructs the channels derived from the base channel, for example the metadata channel, YUV channel, support channel and so on; some of these channels derive from QCamera3ProcessingChannel. All of these channels are defined in QCamera3Channel.cpp:
cpp
int QCamera3HardwareInterface::configureStreamsPerfLocked(
camera3_stream_configuration_t *streamList)
{
...
for (size_t i = 0; i < streamList->num_streams; i++) {
camera3_stream_t *newStream = streamList->streams[i];
if (!stream_exists && newStream->stream_type != CAMERA3_STREAM_INPUT) {
//new stream
stream_info_t* stream_info;
stream_info = (stream_info_t* )malloc(sizeof(stream_info_t));
if (!stream_info) {
LOGE("Could not allocate stream info");
rc = -ENOMEM;
pthread_mutex_unlock(&mMutex);
return rc;
}
stream_info->stream = newStream;
stream_info->status = VALID;
stream_info->channel = NULL;
mStreamInfo.push_back(stream_info);
}
}
mMetadataChannel = new QCamera3MetadataChannel(mCameraHandle->camera_handle,
mChannelHandle, mCameraHandle->ops, captureResultCb,
setBufferErrorStatus, &padding_info, metadataFeatureMask, this);
rc = mMetadataChannel->initialize(IS_TYPE_NONE);
...
for (size_t i = 0; i < streamList->num_streams; i++) {
camera3_stream_t *newStream = streamList->streams[i];
...
switch (newStream->format) {
case HAL_PIXEL_FORMAT_IMPLEMENTATION_DEFINED:
...
channel = new QCamera3RegularChannel(mCameraHandle->camera_handle,
mChannelHandle, mCameraHandle->ops, captureResultCb,
setBufferErrorStatus, &gCamCapability[mCameraId]->padding_info,
this,
newStream,
(cam_stream_type_t)
mStreamConfigInfo.type[mStreamConfigInfo.num_streams],
mStreamConfigInfo.postprocess_mask[mStreamConfigInfo.num_streams],
mMetadataChannel,
bufferCount);
newStream->max_buffers = channel->getNumBuffers();
newStream->priv = channel;
break;
}
...
}
This function iterates over the incoming streamList, creating the corresponding HAL-layer channel for each stream and obtaining the matching HAL-layer stream.
Each channel constructor receives the mChannelHandle object, which is the overall channel created during camera open; all the other channel types are derived from it.
Taking preview as an example, two derived channels are created, QCamera3RegularChannel and QCamera3MetadataChannel, along with two corresponding streams: CAM_STREAM_TYPE_PREVIEW and CAM_STREAM_TYPE_METADATA.
At this point the channels have only been created, not initialized; initialization is triggered when preview starts (at setRepeatingRequest time), which also starts the channels.
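The per-format channel creation described above follows an ordinary factory-switch shape. Here is a self-contained sketch of that pattern; HalFormat and the channel class names are stand-ins for the HAL_PIXEL_FORMAT_* constants and QCamera3RegularChannel and friends, not the real types:

```cpp
#include <memory>
#include <string>

// Illustrative stand-ins for the pixel formats switched on in
// configureStreamsPerfLocked().
enum class HalFormat { ImplementationDefined, Blob, Yuv420 };

struct Channel {
    virtual ~Channel() = default;
    virtual std::string name() const = 0;
};
struct RegularChannel : Channel {
    std::string name() const override { return "regular"; }
};
struct PicChannel : Channel {
    std::string name() const override { return "pic"; }
};
struct YuvChannel : Channel {
    std::string name() const override { return "yuv"; }
};

// Factory mirroring the switch (newStream->format) in the HAL: each stream
// format maps to a dedicated channel type.
std::unique_ptr<Channel> makeChannel(HalFormat f) {
    switch (f) {
        case HalFormat::ImplementationDefined:
            return std::make_unique<RegularChannel>();
        case HalFormat::Blob:
            return std::make_unique<PicChannel>();
        case HalFormat::Yuv420:
            return std::make_unique<YuvChannel>();
    }
    return nullptr;
}
```

In the real code the constructor arguments (camera handle, channel handle, callbacks, metadata channel) are what tie every derived channel back to the single channel created at open time.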
Along the way, when a QCamera3ProcessingChannel is constructed, its member variable m_postprocessor is initialized:
cpp
QCamera3ProcessingChannel::QCamera3ProcessingChannel(uint32_t cam_handle,
uint32_t channel_handle,
mm_camera_ops_t *cam_ops,
channel_cb_routine cb_routine,
channel_cb_buffer_err cb_buffer_err,
cam_padding_info_t *paddingInfo,
void *userData,
camera3_stream_t *stream,
cam_stream_type_t stream_type,
cam_feature_mask_t postprocess_mask,
QCamera3Channel *metadataChannel,
uint32_t numBuffers) :
QCamera3Channel(cam_handle, channel_handle, cam_ops, cb_routine,
cb_buffer_err, paddingInfo, postprocess_mask, userData, numBuffers),
m_postprocessor(this),
mFrameCount(0),
mLastFrameCount(0),
mLastFpsTime(0),
mMemory(numBuffers),
mCamera3Stream(stream),
mNumBufs(CAM_MAX_NUM_BUFS_PER_STREAM),
mStreamType(stream_type),
mPostProcStarted(false),
mReprocessType(REPROCESS_TYPE_NONE),
mInputBufferConfig(false),
m_pMetaChannel(metadataChannel),
mMetaFrame(NULL),
mOfflineMemory(0),
mOfflineMetaMemory(numBuffers + (MAX_REPROCESS_PIPELINE_STAGES - 1))
{
char prop[PROPERTY_VALUE_MAX];
property_get("persist.debug.sf.showfps", prop, "0");
mDebugFPS = (uint8_t) atoi(prop);
int32_t rc = m_postprocessor.init(&mMemory);
if (rc != 0) {
LOGE("Init Postprocessor failed");
}
}
The channel passes itself into m_postprocessor(this), and the constructor body then calls m_postprocessor.init(&mMemory). m_postprocessor is of type QCamera3PostProcessor, declared in QCamera3Channel.h:
cpp
QCamera3PostProcessor m_postprocessor; // post processor
QCamera3PostProcessor::init then launches a thread inside the post processor:
cpp
int32_t QCamera3PostProcessor::init(QCamera3StreamMem *memory)
{
ATRACE_CAMSCOPE_CALL(CAMSCOPE_HAL3_PPROC_INIT);
mOutputMem = memory;
m_dataProcTh.launch(dataProcessRoutine, this);
return NO_ERROR;
}
The thread routine is dataProcessRoutine, which is mainly used to process the channel's data. In other words, QCamera3Channel holds a thread dedicated to data processing.
At this point, the HAL-side work for the application's createCaptureSession call is complete.
In the HAL, this mainly consists of creating and configuring the channels and their corresponding streams, ensuring that the data path between the framework and the HAL is open.