Analysis based on Android P
Based on Google's official documentation
The camera's core operation model:
The figure above mainly illustrates the Camera2 capture flow; it maps equally well onto the setRepeatingRequest flow. The difference between capture and setRepeatingRequest is that of a one-shot request versus a repeating request:
- capture: a still-capture request, submitted once;
- setRepeatingRequest: starts the preview; a single Request that the RequestThread executes over and over in a loop;
A CaptureRequest instance is built with createCaptureRequest().build() and then handed down toward the HAL through setRepeatingRequest.
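As a quick illustration of that difference, here is a sketch using the camera2 NDK equivalents (the article follows the Java API; these are the corresponding native calls). Session creation, request creation, output targets and error handling are assumed to have been set up elsewhere.
```cpp
// Sketch of the two submission modes via the camera2 NDK. Not framework code;
// `session` and `request` are assumed to be fully configured already.
#include <camera/NdkCameraCaptureSession.h>
#include <camera/NdkCaptureRequest.h>

// One-shot: the request is executed exactly once (a still capture).
void takePicture(ACameraCaptureSession* session, ACaptureRequest* request) {
    int seqId = 0;
    ACameraCaptureSession_capture(session, /*callbacks*/ nullptr,
                                  /*numRequests*/ 1, &request, &seqId);
}

// Repeating: the framework keeps re-submitting the same request until
// stopRepeating() is called; this is how preview runs.
void startPreview(ACameraCaptureSession* session, ACaptureRequest* request) {
    int seqId = 0;
    ACameraCaptureSession_setRepeatingRequest(session, /*callbacks*/ nullptr,
                                              /*numRequests*/ 1, &request, &seqId);
}
```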
A new CaptureRequest is placed into a queue called the Pending Request Queue, where it waits to be executed. Whenever the In-Flight Capture Queue has room, a number of pending CaptureRequests are taken from the Pending Request Queue and a capture is performed according to each CaptureRequest's configuration. In the end we obtain image data from the Surfaces of various sizes, together with a CaptureResult containing a large amount of information about this capture, and the flow ends.
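The two queues can be pictured with a toy model; this is only a reading aid, the real bookkeeping lives in Camera3Device and its RequestThread.
```cpp
// Conceptual model of the pipeline described above: requests wait in a pending
// queue, move to the in-flight queue while the HAL works on them, and leave
// once their CaptureResult arrives. Illustrative only.
#include <cstddef>
#include <cstdint>
#include <deque>

struct PendingRequest { uint32_t frameNumber; };

class CapturePipelineSketch {
public:
    void submit(PendingRequest r) { mPending.push_back(r); }  // capture()/setRepeatingRequest()

    // Hand work to the HAL whenever it has room for more in-flight captures.
    void schedule(std::size_t maxInFlight) {
        while (mInFlight.size() < maxInFlight && !mPending.empty()) {
            mInFlight.push_back(mPending.front());
            mPending.pop_front();
        }
    }

    // Called when the HAL reports the CaptureResult for a frame.
    void onResult(uint32_t frameNumber) {
        if (!mInFlight.empty() && mInFlight.front().frameNumber == frameNumber) {
            mInFlight.pop_front();                            // this capture is finished
        }
    }

private:
    std::deque<PendingRequest> mPending;    // "Pending Request Queue"
    std::deque<PendingRequest> mInFlight;   // "In-Flight Capture Queue"
};
```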
The HAL layer does two things:
- onCaptureComplete;
- output image queues;
onCaptureComplete
The captured result metadata is packaged into a CaptureResult and passed back up to the framework.
The invocation of the onCaptureComplete() callback was already described in the processCaptureResult flow analysis. Since buffer data is not delivered to the upper layers through onCaptureComplete(), we still need to look at the output image queues.
output image queues
The 1 to N buffers of image data are delivered into their corresponding Surfaces. Because a setRepeatingRequest may target multiple Surfaces, there can be a matching number of buffers.
This diagram nicely illustrates the workflow between the HAL layer and the CameraServer:
- A CaptureRequest is added;
- The HAL device must process requests in order and, for each request, produce the output result metadata along with one or more output image buffers;
- The captured metadata, configuration and state are packaged into a CaptureResult, and the image buffers are filled and returned on their output streams (the structures exchanged here are sketched right after this list);
- The output stream slots are then emptied, ready for the next use;
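To make that per-request exchange concrete, here is a pared-down sketch of the data that crosses the boundary. The field names follow the camera3_* snippets used later in this article; the real definitions in hardware/camera3.h contain additional members, so treat this purely as a reading aid.
```cpp
// Pared-down sketch of the HAL3 per-request exchange. Only fields discussed in
// this article are kept; the real camera3_capture_request_t /
// camera3_capture_result_t in hardware/camera3.h carry more information.
#include <cstdint>

struct StreamBufferSketch {        // stands in for camera3_stream_buffer_t
    void* stream;                  // which configured output stream
    void* buffer;                  // buffer_handle_t* in the real definition
    int   status;                  // CAMERA3_BUFFER_STATUS_OK / _ERROR
    int   acquire_fence;           // the HAL waits on this before writing
    int   release_fence;           // the HAL signals this when the buffer is done
};

struct CaptureRequestSketch {      // what CameraServer hands to the HAL
    uint32_t            frame_number;
    uint32_t            num_output_buffers;
    StreamBufferSketch* output_buffers;   // one entry per target Surface
};

struct CaptureResultSketch {       // what the HAL reports back
    uint32_t            frame_number;     // matches the request
    const void*         result;           // camera_metadata_t* with per-frame metadata
    uint32_t            num_output_buffers;
    StreamBufferSketch* output_buffers;   // the same buffers, now filled
};
```
The HAL consumes one such request at a time, fills each of its output buffers, and reports back a result carrying the metadata for the same frame_number; that is exactly the round trip the list above describes.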
Binding the Surface buffers into the HAL layer
prepareHalRequests & Camera3OutputStream
We already walked through the prepareHalRequests() function in the RequestThread flow analysis:
```cpp
status_t Camera3Device::RequestThread::prepareHalRequests() {
ATRACE_CALL();
bool batchedRequest = mNextRequests[0].captureRequest->mBatchSize > 1;
for (size_t i = 0; i < mNextRequests.size(); i++) {
auto& nextRequest = mNextRequests.editItemAt(i);
// Pull out the fields of this NextRequest entry
sp<CaptureRequest> captureRequest = nextRequest.captureRequest;
camera3_capture_request_t* halRequest = &nextRequest.halRequest;
// outputBuffers is a container holding the stream buffers
Vector<camera3_stream_buffer_t>* outputBuffers = &nextRequest.outputBuffers;
// Prepare a request to HAL
// Building up halRequest -- start
halRequest->frame_number = captureRequest->mResultExtras.frameNumber;
........................
// Construct camera3_stream_buffer_t objects and insert them into the outputBuffers container
// They are inserted at index 0; the count is captureRequest->mOutputStreams.size(), typically 1
outputBuffers->insertAt(camera3_stream_buffer_t(), 0,
captureRequest->mOutputStreams.size());
// halRequest->output_buffers points at the buffers inserted above, i.e. the start of the camera3_stream_buffer_t array
halRequest->output_buffers = outputBuffers->array();
std::set<String8> requestedPhysicalCameras;
sp<Camera3Device> parent = mParent.promote();
if (parent == NULL) {
// Should not happen, and nowhere to send errors to, so just log it
CLOGE("RequestThread: Parent is gone");
return INVALID_OPERATION;
}
nsecs_t waitDuration = kBaseGetBufferWait + parent->getExpectedInFlightDuration();
SurfaceMap uniqueSurfaceIdMap;
for (size_t j = 0; j < captureRequest->mOutputStreams.size(); j++) {
sp<Camera3OutputStreamInterface> outputStream =
captureRequest->mOutputStreams.editItemAt(j);
int streamId = outputStream->getId();
........................
if (mUseHalBufManager) {
if (outputStream->isAbandoned()) {
ALOGV("%s: stream %d is abandoned, skipping request", __FUNCTION__, streamId);
return TIMED_OUT;
}
// HAL will request buffer through requestStreamBuffer API
camera3_stream_buffer_t& buffer = outputBuffers->editItemAt(j);
buffer.stream = outputStream->asHalStream();
buffer.buffer = nullptr;
buffer.status = CAMERA3_BUFFER_STATUS_OK;
buffer.acquire_fence = -1;
buffer.release_fence = -1;
} else {
// outputStream is of type Camera3OutputStream
res = outputStream->getBuffer(&outputBuffers->editItemAt(j),
waitDuration,
captureRequest->mOutputSurfaces[streamId]);
if (res != OK) {
// Can't get output buffer from gralloc queue - this could be due to
// abandoned queue or other consumer misbehavior, so not a fatal
// error
ALOGV("RequestThread: Can't get output buffer, skipping request:"
" %s (%d)", strerror(-res), res);
return TIMED_OUT;
}
}
........................
return OK;
}
```
Before analyzing this method we have to be clear about the one crucial job it performs: preparing the output buffers. Everything the HAL does revolves around these output buffers, so after reading this method we must understand how the output buffers are prepared and where they end up. The whole method is a single for loop that handles each incoming Request, and the logic inside it fills in the fields of the halRequest member: outputBuffers->insertAt() creates the corresponding camera3_stream_buffer_t objects, and halRequest->output_buffers then records the address of that camera3_stream_buffer_t array.
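The part that is easy to miss is that halRequest->output_buffers does not own anything: it merely aliases the storage of the outputBuffers vector, and the slots are filled in place afterwards by getBuffer(). A toy illustration of the same pattern, using std::vector instead of android::Vector purely for readability:
```cpp
// Toy illustration of the insertAt()/array() pattern above. Not framework code;
// it only shows the ownership/aliasing idea.
#include <cstddef>
#include <vector>

struct Slot { void* buffer = nullptr; };       // stands in for camera3_stream_buffer_t

struct RequestSketch {                         // stands in for camera3_capture_request_t
    const Slot* output_buffers = nullptr;      // like halRequest->output_buffers
    std::size_t num_output_buffers = 0;
};

void prepareSketch(RequestSketch& req, std::vector<Slot>& outputBuffers, std::size_t numStreams) {
    outputBuffers.assign(numStreams, Slot{});      // ~ insertAt(camera3_stream_buffer_t(), 0, n)
    req.output_buffers     = outputBuffers.data(); // ~ outputBuffers->array()
    req.num_output_buffers = numStreams;
    for (std::size_t j = 0; j < numStreams; j++) {
        // ~ getBuffer(&outputBuffers->editItemAt(j), ...) later fills each slot in place,
        //   and the request sees the result through output_buffers
        outputBuffers[j].buffer = nullptr;         // placeholder for the dequeued gralloc handle
    }
}
```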
With the buffer slots in place, outputStream->getBuffer() is called:
/frameworks/av/services/camera/libcameraservice/device3/Camera3Stream.cpp
```cpp
status_t Camera3Stream::getBuffer(camera3_stream_buffer *buffer,
const std::vector<size_t>& surface_ids) {
ATRACE_CALL();
Mutex::Autolock l(mLock);
status_t res = OK;
// This function should be only called when the stream is configured already.
if (mState != STATE_CONFIGURED) {
ALOGE("%s: Stream %d: Can't get buffers if stream is not in CONFIGURED state %d",
__FUNCTION__, mId, mState);
return INVALID_OPERATION;
}
// Wait for new buffer returned back if we are running into the limit.
if (getHandoutOutputBufferCountLocked() == camera3_stream::max_buffers) {
ALOGV("%s: Already dequeued max output buffers (%d), wait for next returned one.",
__FUNCTION__, camera3_stream::max_buffers);
nsecs_t waitStart = systemTime(SYSTEM_TIME_MONOTONIC);
res = mOutputBufferReturnedSignal.waitRelative(mLock, kWaitForBufferDuration);
nsecs_t waitEnd = systemTime(SYSTEM_TIME_MONOTONIC);
mBufferLimitLatency.add(waitStart, waitEnd);
if (res != OK) {
if (res == TIMED_OUT) {
ALOGE("%s: wait for output buffer return timed out after %lldms (max_buffers %d)",
__FUNCTION__, kWaitForBufferDuration / 1000000LL,
camera3_stream::max_buffers);
}
return res;
}
}
res = getBufferLocked(buffer, surface_ids);
if (res == OK) {
fireBufferListenersLocked(*buffer, /*acquired*/true, /*output*/true);
if (buffer->buffer) {
Mutex::Autolock l(mOutstandingBuffersLock);
mOutstandingBuffers.push_back(*buffer->buffer);
}
}
return res;
}
```
This function in turn calls getBufferLocked():
/frameworks/av/services/camera/libcameraservice/device3/Camera3OutputStream.cpp
```cpp
status_t Camera3OutputStream::getBufferLocked(camera3_stream_buffer *buffer,
const std::vector<size_t>&) {
ATRACE_CALL();
ANativeWindowBuffer* anb;
int fenceFd = -1;
status_t res;
// dequeueBuffer associates the Surface with anb (an ANativeWindowBuffer)
res = getBufferLockedCommon(&anb, &fenceFd);
if (res != OK) {
return res;
}
/**
* FenceFD now owned by HAL except in case of error,
* in which case we reassign it to acquire_fence
*/
// Use the anb obtained above to fill in the stream buffer
handoutBufferLocked(*buffer, &(anb->handle), /*acquireFence*/fenceFd,
/*releaseFence*/-1, CAMERA3_BUFFER_STATUS_OK, /*output*/true);
return OK;
}
```
First, getBufferLockedCommon() is called:
```cpp
status_t Camera3OutputStream::getBufferLockedCommon(ANativeWindowBuffer** anb, int* fenceFd) {
ATRACE_CALL();
status_t res;
if ((res = getBufferPreconditionCheckLocked()) != OK) {
return res;
}
bool gotBufferFromManager = false;
if (mUseBufferManager) {
sp<GraphicBuffer> gb;
// Buffers are managed by Camera3BufferManager; fetch a GraphicBuffer from it as gb
res = mBufferManager->getBufferForStream(getId(), getStreamSetId(), &gb, fenceFd);
if (res == OK) {
// Attach this buffer to the bufferQueue: the buffer will be in dequeue state after a
// successful return.
// Turn gb into an ANativeWindowBuffer and attach it to the consumer's buffer queue
*anb = gb.get();
res = mConsumer->attachBuffer(*anb);
if (res != OK) {
ALOGE("%s: Stream %d: Can't attach the output buffer to this surface: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
// Remember that this buffer came from Camera3BufferManager
gotBufferFromManager = true;
ALOGV("Stream %d: Attached new buffer", getId());
} else if (res == ALREADY_EXISTS) {
// Have sufficient free buffers already attached, can just
// dequeue from buffer queue
ALOGV("Stream %d: Reusing attached buffer", getId());
gotBufferFromManager = false;
} else if (res != OK) {
ALOGE("%s: Stream %d: Can't get next output buffer from buffer manager: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
return res;
}
}
if (!gotBufferFromManager) {
/**
* Release the lock briefly to avoid deadlock for below scenario:
* Thread 1: StreamingProcessor::startStream -> Camera3Stream::isConfiguring().
* This thread acquired StreamingProcessor lock and try to lock Camera3Stream lock.
* Thread 2: Camera3Stream::returnBuffer->StreamingProcessor::onFrameAvailable().
* This thread acquired Camera3Stream lock and bufferQueue lock, and try to lock
* StreamingProcessor lock.
* Thread 3: Camera3Stream::getBuffer(). This thread acquired Camera3Stream lock
* and try to lock bufferQueue lock.
* Then there is circular locking dependency.
*/
sp<ANativeWindow> currentConsumer = mConsumer;
mLock.unlock();
nsecs_t dequeueStart = systemTime(SYSTEM_TIME_MONOTONIC);
// Otherwise, use a free buffer from the consumer itself
res = currentConsumer->dequeueBuffer(currentConsumer.get(), anb, fenceFd);
nsecs_t dequeueEnd = systemTime(SYSTEM_TIME_MONOTONIC);
mDequeueBufferLatency.add(dequeueStart, dequeueEnd);
mLock.lock();
if (res != OK) {
ALOGE("%s: Stream %d: Can't dequeue next output buffer: %s (%d)",
__FUNCTION__, mId, strerror(-res), res);
// Only transition to STATE_ABANDONED from STATE_CONFIGURED. (If it is STATE_PREPARING,
// let prepareNextBuffer handle the error.)
if (res == NO_INIT && mState == STATE_CONFIGURED) {
mState = STATE_ABANDONED;
}
return res;
}
}
if (res == OK) {
std::vector<sp<GraphicBuffer>> removedBuffers;
res = mConsumer->getAndFlushRemovedBuffers(&removedBuffers);
if (res == OK) {
onBuffersRemovedLocked(removedBuffers);
if (mUseBufferManager && removedBuffers.size() > 0) {
mBufferManager->onBuffersRemoved(getId(), getStreamSetId(), removedBuffers.size());
}
}
}
return res;
}
```
This function associates mConsumer with anb, which in effect associates the Surface with an ANativeWindowBuffer: currentConsumer->dequeueBuffer() dequeues a free buffer from the consumer and binds it to the anb object.
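As a side note, the dequeue/fill/queue producer cycle that Camera3OutputStream drives on the Surface here can also be seen in miniature through the public NDK wrapper. The sketch below is only an analogy for the producer role (the camera HAL writes image data instead of a memset); it is not the framework code:
```cpp
// CPU-side analogy of the dequeue -> fill -> queue cycle on an ANativeWindow.
// ANativeWindow_lock() dequeues a free buffer; ANativeWindow_unlockAndPost()
// queues it back to the consumer side of the BufferQueue.
#include <android/native_window.h>
#include <cstring>

bool drawOneFrame(ANativeWindow* window) {
    // Ask for 32-bit RGBA so the memset below writes exactly 4 bytes per pixel.
    ANativeWindow_setBuffersGeometry(window, 0, 0, WINDOW_FORMAT_RGBA_8888);

    ANativeWindow_Buffer buf{};
    if (ANativeWindow_lock(window, &buf, /*inOutDirtyBounds*/ nullptr) != 0) {
        return false;                      // could not dequeue a free buffer
    }
    // "Fill" the dequeued buffer; a camera HAL would write image data here.
    std::memset(buf.bits, 0, static_cast<size_t>(buf.stride) * buf.height * 4);
    // Queue the filled buffer back for the consumer.
    return ANativeWindow_unlockAndPost(window) == 0;
}
```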
With getBufferLockedCommon() covered, the next function to look at is handoutBufferLocked():
/frameworks/av/services/camera/libcameraservice/device3/Camera3IOStreamBase.cpp
```cpp
void Camera3IOStreamBase::handoutBufferLocked(camera3_stream_buffer &buffer,
buffer_handle_t *handle,
int acquireFence,
int releaseFence,
camera3_buffer_status_t status,
bool output) {
/**
* Note that all fences are now owned by HAL.
*/
// Handing out a raw pointer to this object. Increment internal refcount.
incStrong(this);
buffer.stream = this;
buffer.buffer = handle;
buffer.acquire_fence = acquireFence;
buffer.release_fence = releaseFence;
buffer.status = status;
// Inform tracker about becoming busy
if (mHandoutTotalBufferCount == 0 && mState != STATE_IN_CONFIG &&
mState != STATE_IN_RECONFIG && mState != STATE_PREPARING) {
/**
* Avoid a spurious IDLE->ACTIVE->IDLE transition when using buffers
* before/after register_stream_buffers during initial configuration
* or re-configuration, or during prepare pre-allocation
*/
sp<StatusTracker> statusTracker = mStatusTracker.promote();
if (statusTracker != 0) {
statusTracker->markComponentActive(mStatusId);
}
}
mHandoutTotalBufferCount++;
if (output) {
mHandoutOutputBufferCount++;
}
}
```
This initializes the buffer fields; the entry being filled in here is exactly what halRequest->output_buffers points to.
At this point halRequest has been built up step by step. The output buffers are the member variable outputBuffers, prepared by the call outputStream->getBuffer(&outputBuffers->editItemAt(j), waitDuration, captureRequest->mOutputSurfaces[streamId]).
In this way, halRequest->output_buffers is tied to our preview window.
processCaptureRequest
Once prepareHalRequests() finishes, sendRequestsBatch() is executed next.
The call chain is:
- Camera3Device::RequestThread::sendRequestsBatch();
- Camera3Device::HalInterface::processBatchCaptureRequests();
- Camera3Device::HalInterface::processCaptureRequest();
- and finally the HAL3-level processCaptureRequest() function:
/hardware/qcom/camera/QCamera2/HAL3/QCamera3HWI.cpp
```cpp
int QCamera3HardwareInterface::processCaptureRequest(
camera3_capture_request_t *request,
List<InternalRequest> &internallyRequestedStreams)
{
...
for (size_t i = 0; i < request->num_output_buffers; i++) {
const camera3_stream_buffer_t& output = request->output_buffers[i];
QCamera3Channel *channel = (QCamera3Channel *)output.stream->priv;
...
rc = channel->request(output.buffer, frameNumber,
NULL, mParameters, indexUsed);
...
}
```
In this function, channel->request() is called to register the buffer with HAL3:
/hardware/qcom/camera/msm8998/QCamera2/HAL3/QCamera3Channel.cpp
```cpp
int32_t QCamera3ProcessingChannel::request(buffer_handle_t *buffer,
uint32_t frameNumber,
camera3_stream_buffer_t* pInputBuffer,
metadata_buffer_t* metadata,
int &indexUsed,
__unused bool internalRequest = false,
__unused bool meteringOnly = false)
{
int32_t rc = NO_ERROR;
int index;
if (NULL == buffer || NULL == metadata) {
LOGE("Invalid buffer/metadata in channel request");
return BAD_VALUE;
}
if (pInputBuffer) {
//need to send to reprocessing
LOGD("Got a request with input buffer, output streamType = %d", mStreamType);
reprocess_config_t reproc_cfg;
cam_dimension_t dim;
memset(&reproc_cfg, 0, sizeof(reprocess_config_t));
memset(&dim, 0, sizeof(dim));
setReprocConfig(reproc_cfg, pInputBuffer, metadata, mStreamFormat, dim);
startPostProc(reproc_cfg);
qcamera_fwk_input_pp_data_t *src_frame = NULL;
src_frame = (qcamera_fwk_input_pp_data_t *)calloc(1,
sizeof(qcamera_fwk_input_pp_data_t));
if (src_frame == NULL) {
LOGE("No memory for src frame");
return NO_MEMORY;
}
rc = setFwkInputPPData(src_frame, pInputBuffer, &reproc_cfg, metadata, buffer, frameNumber);
if (NO_ERROR != rc) {
LOGE("Error %d while setting framework input PP data", rc);
free(src_frame);
return rc;
}
LOGH("Post-process started");
m_postprocessor.processData(src_frame);
} else {
index = mMemory.getMatchBufIndex((void*)buffer);
if(index < 0) {
rc = registerBuffer(buffer, mIsType);
if (NO_ERROR != rc) {
LOGE("On-the-fly buffer registration failed %d",
rc);
return rc;
}
index = mMemory.getMatchBufIndex((void*)buffer);
if (index < 0) {
LOGE("Could not find object among registered buffers");
return DEAD_OBJECT;
}
}
rc = mMemory.markFrameNumber(index, frameNumber);
if(rc != NO_ERROR) {
LOGE("Error marking frame number:%d for index %d", frameNumber,
index);
return rc;
}
if (m_bIsActive) {
rc = mStreams[0]->bufDone(index);
if(rc != NO_ERROR) {
LOGE("Failed to Q new buffer to stream");
mMemory.markFrameNumber(index, -1);
return rc;
}
}
indexUsed = index;
}
return rc;
}
```
The main logic here is rc = registerBuffer(buffer, mIsType):
```cpp
int32_t QCamera3ProcessingChannel::registerBuffer(buffer_handle_t *buffer,
cam_is_type_t isType)
{
ATRACE_CAMSCOPE_CALL(CAMSCOPE_HAL3_PROC_CH_REG_BUF);
int rc = 0;
mIsType = isType;
cam_stream_type_t streamType;
if ((uint32_t)mMemory.getCnt() > (mNumBufs - 1)) {
LOGE("Trying to register more buffers than initially requested");
return BAD_VALUE;
}
if (0 == m_numStreams) {
rc = initialize(mIsType);
if (rc != NO_ERROR) {
LOGE("Couldn't initialize camera stream %d", rc);
return rc;
}
}
streamType = mStreams[0]->getMyType();
rc = mMemory.registerBuffer(buffer, streamType);
if (ALREADY_EXISTS == rc) {
return NO_ERROR;
} else if (NO_ERROR != rc) {
LOGE("Buffer %p couldn't be registered %d", buffer, rc);
return rc;
}
return rc;
}
```
Let's focus on the mMemory.registerBuffer() logic.
First, the definitions of mMemory and mGrallocMem:
/hardware/qcom/camera/msm8998/QCamera2/HAL3/QCamera3Channel.h
```cpp
..................
QCamera3StreamMem mMemory; //output buffer allocated by fwk
..................
private:
camera3_stream_t *mStream;
QCamera3GrallocMemory mGrallocMem;
};
```
```cpp
int QCamera3StreamMem::registerBuffer(buffer_handle_t *buffer,
cam_stream_type_t type)
{
Mutex::Autolock lock(mLock);
return mGrallocMem.registerBuffer(buffer, type);
}
```
And mGrallocMem.registerBuffer():
```cpp
int QCamera3GrallocMemory::registerBuffer(buffer_handle_t *buffer,
__unused cam_stream_type_t type)
{
status_t ret = NO_ERROR;
struct ion_fd_data ion_info_fd;
int32_t colorSpace = ITU_R_601_FR;
int32_t idx = -1;
LOGD("E");
memset(&ion_info_fd, 0, sizeof(ion_info_fd));
if (0 <= getMatchBufIndex((void *) buffer)) {
LOGL("Buffer already registered");
return ALREADY_EXISTS;
}
Mutex::Autolock lock(mLock);
if (mBufferCount >= (MM_CAMERA_MAX_NUM_FRAMES - 1 - mStartIdx)) {
LOGE("Number of buffers %d greater than what's supported %d",
mBufferCount, MM_CAMERA_MAX_NUM_FRAMES - mStartIdx);
return BAD_INDEX;
}
idx = getFreeIndexLocked();
if (0 > idx) {
LOGE("No available memory slots");
return BAD_INDEX;
}
mBufferHandle[idx] = buffer;
mPrivateHandle[idx] = (struct private_handle_t *)(*mBufferHandle[idx]);
setMetaData(mPrivateHandle[idx], UPDATE_COLOR_SPACE, &colorSpace);
if (main_ion_fd < 0) {
LOGE("failed: could not open ion device");
ret = NO_MEMORY;
goto end;
} else {
ion_info_fd.fd = mPrivateHandle[idx]->fd;
if (ioctl(main_ion_fd,
ION_IOC_IMPORT, &ion_info_fd) < 0) {
LOGE("ION import failed\n");
ret = NO_MEMORY;
goto end;
}
}
LOGD("idx = %d, fd = %d, size = %d, offset = %d",
idx, mPrivateHandle[idx]->fd,
mPrivateHandle[idx]->size,
mPrivateHandle[idx]->offset);
mMemInfo[idx].fd = mPrivateHandle[idx]->fd;
mMemInfo[idx].size =
( /* FIXME: Should update ION interface */ size_t)
mPrivateHandle[idx]->size;
mMemInfo[idx].handle = ion_info_fd.handle;
mBufferCount++;
end:
LOGD("X ");
return ret;
}
```
At this point the window's buffers (the buffers backing the Surface) are registered with HAL3, and mMemInfo keeps the buffer information (fd, size and ION handle).
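Stripped of the ION details, the registration logic is a small fixed-size slot table keyed by the gralloc buffer handle: look the handle up, register it on first sight, then refer to it by index (markFrameNumber, bufDone). A toy sketch of that bookkeeping with hypothetical types and names:
```cpp
// Toy sketch of the lazy "find-or-register" bookkeeping used above: a fixed
// number of slots, each remembering a buffer handle plus its fd/size, looked
// up by handle and registered on first use. Hypothetical types and names.
#include <array>
#include <cstddef>

constexpr int kMaxBufsPerStream = 64;      // mirrors CAM_MAX_NUM_BUFS_PER_STREAM

struct SlotInfo {
    const void* handle = nullptr;          // stands in for buffer_handle_t*
    int         fd     = -1;
    std::size_t size   = 0;
};

class BufferRegistrySketch {
public:
    // Return the slot index for `handle`, registering it if it is new.
    int findOrRegister(const void* handle, int fd, std::size_t size) {
        for (int i = 0; i < mCount; i++) {
            if (mSlots[i].handle == handle) return i;   // ~ getMatchBufIndex()
        }
        if (mCount >= kMaxBufsPerStream) return -1;     // no free slot
        mSlots[mCount] = {handle, fd, size};            // ~ registerBuffer()
        return mCount++;
    }
private:
    std::array<SlotInfo, kMaxBufsPerStream> mSlots{};
    int mCount = 0;
};
```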
The definition of mMemInfo:
```cpp
QCamera3Memory::QCamera3Memory()
{
mBufferCount = 0;
for (int i = 0; i < MM_CAMERA_MAX_NUM_FRAMES; i++) {
mMemInfo[i].fd = -1;
mMemInfo[i].handle = 0;
mMemInfo[i].size = 0;
mCurrentFrameNumbers[i] = -1;
}
main_ion_fd = open("/dev/ion", O_RDONLY);
}
```
/hardware/qcom/camera/msm8998/QCamera2/stack/common/mm_camera_interface.h
```cpp
#define MM_CAMERA_MAX_NUM_FRAMES CAM_MAX_NUM_BUFS_PER_STREAM
```
/hardware/qcom/camera/msm8998/QCamera2/stack/common/cam_types.h
```cpp
#define CAM_MAX_NUM_BUFS_PER_STREAM 64
```
/hardware/qcom/camera/msm8998/QCamera2/HAL3/QCamera3Mem.h
```cpp
struct QCamera3MemInfo mMemInfo[MM_CAMERA_MAX_NUM_FRAMES];
```
HAL data dispatch
After the buffers are bound in the HAL, the HAL layer calls the start_channel function, which through a chain of calls ends up in mm_stream_init_bufs():
/hardware/qcom/camera/msm8998/QCamera2/stack/mm-camera-interface/src/mm_camera_stream.c
```c
/*===========================================================================
* FUNCTION : mm_stream_init_bufs
*
* DESCRIPTION: initialize stream buffers needed. This function will request
* buffers needed from upper layer through the mem ops table passed
* during configuration stage.
*
* PARAMETERS :
* @my_obj : stream object
*
* RETURN : int32_t type of status
* 0 -- success
* -1 -- failure
*==========================================================================*/
int32_t mm_stream_init_bufs(mm_stream_t * my_obj)
{
int32_t i, rc = 0;
uint8_t *reg_flags = NULL;
LOGD("E, my_handle = 0x%x, fd = %d, state = %d",
my_obj->my_hdl, my_obj->fd, my_obj->state);
/* deinit buf if it's not NULL*/
if (NULL != my_obj->buf) {
mm_stream_deinit_bufs(my_obj);
}
if (!my_obj->is_res_shared) {
rc = my_obj->mem_vtbl.get_bufs(&my_obj->frame_offset,
&my_obj->total_buf_cnt, &reg_flags, &my_obj->buf,
&my_obj->map_ops, my_obj->mem_vtbl.user_data);
if (rc == 0) {
for (i = 0; i < my_obj->total_buf_cnt; i++) {
my_obj->buf_status[i].initial_reg_flag = reg_flags[i];
}
}
} else {
rc = mm_camera_muxer_get_stream_bufs(my_obj);
}
if (0 != rc) {
LOGE("Error get buf, rc = %d\n", rc);
return rc;
}
LOGH("Buffer count = %d buf id = %d",my_obj->buf_num, my_obj->buf_idx);
for (i = my_obj->buf_idx; i < (my_obj->buf_idx + my_obj->buf_num); i++) {
my_obj->buf[i].stream_id = my_obj->my_hdl;
my_obj->buf[i].stream_type = my_obj->stream_info->stream_type;
if (my_obj->buf[i].buf_type == CAM_STREAM_BUF_TYPE_USERPTR) {
my_obj->buf[i].user_buf.bufs_used =
(int8_t)my_obj->stream_info->user_buf_info.frame_buf_cnt;
if (reg_flags) {
my_obj->buf[i].user_buf.buf_in_use = reg_flags[i];
}
}
}
if (my_obj->stream_info->streaming_mode == CAM_STREAMING_MODE_BATCH) {
my_obj->plane_buf = my_obj->buf[0].user_buf.plane_buf;
if (my_obj->plane_buf != NULL) {
my_obj->plane_buf_num =
my_obj->buf_num *
my_obj->stream_info->user_buf_info.frame_buf_cnt;
for (i = 0; i < my_obj->plane_buf_num; i++) {
my_obj->plane_buf[i].stream_id = my_obj->my_hdl;
my_obj->plane_buf[i].stream_type = my_obj->stream_info->stream_type;
}
}
my_obj->cur_bufs_staged = 0;
my_obj->cur_buf_idx = -1;
}
free(reg_flags);
reg_flags = NULL;
/* update in stream info about number of stream buffers */
my_obj->stream_info->num_bufs = my_obj->total_buf_cnt;
return rc;
}
```
This function calls my_obj->mem_vtbl.get_bufs(): mem_vtbl is the table of memory-operation callbacks passed in by the upper layer during the configuration stage, and its user_data field carries the owning stream object back across the C interface.
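Here is a minimal sketch of that ops-table/trampoline pattern; the names are illustrative, not the mm-camera-interface definitions:
```cpp
// Minimal sketch of the mem_vtbl pattern: the C interface layer only knows a
// table of function pointers plus an opaque user_data; the C++ HAL registers a
// static trampoline and passes `this` as user_data so the call can be routed
// back to the right stream object. Illustrative names only.
#include <cstdint>

struct mem_ops_sketch {
    int32_t (*get_bufs)(uint8_t* num_bufs, void* user_data);
    void*   user_data;
};

class StreamSketch {
public:
    void fillOps(mem_ops_sketch* ops) {
        ops->get_bufs  = &StreamSketch::get_bufs_trampoline;  // like QCamera3Stream::get_bufs
        ops->user_data = this;                                // recovered via reinterpret_cast
    }
private:
    static int32_t get_bufs_trampoline(uint8_t* num_bufs, void* user_data) {
        auto* self = reinterpret_cast<StreamSketch*>(user_data);
        return self->getBufs(num_bufs);                       // the real allocation work
    }
    int32_t getBufs(uint8_t* num_bufs) { *num_bufs = 0; return 0; }
};

// The C side (mm_stream_init_bufs) then simply does:
//     rc = ops->get_bufs(&num_bufs, ops->user_data);
```
In the real code the callback registered there is QCamera3Stream::get_bufs, which casts user_data back to the stream object: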
/hardware/qcom/camera/msm8998/QCamera2/HAL3/QCamera3Stream.cpp
```cpp
int32_t QCamera3Stream::get_bufs(
cam_frame_len_offset_t *offset,
uint8_t *num_bufs,
uint8_t **initial_reg_flag,
mm_camera_buf_def_t **bufs,
mm_camera_map_unmap_ops_tbl_t *ops_tbl,
void *user_data)
{
int32_t rc = NO_ERROR;
QCamera3Stream *stream = reinterpret_cast<QCamera3Stream *>(user_data);
if (!stream) {
LOGE("getBufs invalid stream pointer");
return NO_MEMORY;
}
rc = stream->getBufs(offset, num_bufs, initial_reg_flag, bufs, ops_tbl);
if (NO_ERROR != rc) {
LOGE("stream->getBufs failed");
return NO_MEMORY;
}
if (stream->mBatchSize) {
//Allocate batch buffers if mBatchSize is non-zero. All the output
//arguments correspond to batch containers and not image buffers
rc = stream->getBatchBufs(num_bufs, initial_reg_flag,
bufs, ops_tbl);
}
return rc;
}
```
It calls stream->getBufs(), where stream is the QCamera3Stream recovered from user_data:
```cpp
/*===========================================================================
* FUNCTION : getBufs
*
* DESCRIPTION: allocate stream buffers
*
* PARAMETERS :
* @offset : offset info of stream buffers
* @num_bufs : number of buffers allocated
* @initial_reg_flag: flag to indicate if buffer needs to be registered
* at kernel initially
* @bufs : output of allocated buffers
* @ops_tbl : ptr to buf mapping/unmapping ops
*
* RETURN : int32_t type of status
* NO_ERROR -- success
* none-zero failure code
*==========================================================================*/
int32_t QCamera3Stream::getBufs(cam_frame_len_offset_t *offset,
uint8_t *num_bufs,
uint8_t **initial_reg_flag,
mm_camera_buf_def_t **bufs,
mm_camera_map_unmap_ops_tbl_t *ops_tbl)
{
int rc = NO_ERROR;
uint8_t *regFlags;
Mutex::Autolock lock(mLock);
if (!ops_tbl) {
LOGE("ops_tbl is NULL");
return INVALID_OPERATION;
}
mFrameLenOffset = *offset;
mMemOps = ops_tbl;
if (mStreamBufs != NULL) {
LOGE("Failed getBufs being called twice in a row without a putBufs call");
return INVALID_OPERATION;
}
// Fetch the stream's buffers from the channel; internally this calls allocateAll(len) to allocate memory of the corresponding length
mStreamBufs = mChannel->getStreamBufs(mFrameLenOffset.frame_len);
if (!mStreamBufs) {
LOGE("Failed to allocate stream buffers");
return NO_MEMORY;
}
for (uint32_t i = 0; i < mNumBufs; i++) {
if (mStreamBufs->valid(i)) {
// Get the buffer size; returns mMemInfo[index].size
ssize_t bufSize = mStreamBufs->getSize(i);
if (BAD_INDEX != bufSize) {
void* buffer = (mMapStreamBuffers ?
mStreamBufs->getPtr(i) : NULL);
// Map the buffer into the camera daemon process
rc = ops_tbl->map_ops(i, -1, mStreamBufs->getFd(i),
(size_t)bufSize, buffer,
CAM_MAPPING_BUF_TYPE_STREAM_BUF,
ops_tbl->userdata);
if (rc < 0) {
LOGE("map_stream_buf failed: %d", rc);
for (uint32_t j = 0; j < i; j++) {
if (mStreamBufs->valid(j)) {
ops_tbl->unmap_ops(j, -1,
CAM_MAPPING_BUF_TYPE_STREAM_BUF,
ops_tbl->userdata);
}
}
return INVALID_OPERATION;
}
} else {
LOGE("Failed to retrieve buffer size (bad index)");
return INVALID_OPERATION;
}
}
}
//regFlags array is allocated by us, but consumed and freed by mm-camera-interface
regFlags = (uint8_t *)malloc(sizeof(uint8_t) * mNumBufs);
if (!regFlags) {
LOGE("Out of memory");
for (uint32_t i = 0; i < mNumBufs; i++) {
if (mStreamBufs->valid(i)) {
ops_tbl->unmap_ops(i, -1, CAM_MAPPING_BUF_TYPE_STREAM_BUF,
ops_tbl->userdata);
}
}
return NO_MEMORY;
}
memset(regFlags, 0, sizeof(uint8_t) * mNumBufs);
// Allocate the array of buffer definitions
mBufDefs = (mm_camera_buf_def_t *)malloc(mNumBufs * sizeof(mm_camera_buf_def_t));
if (mBufDefs == NULL) {
LOGE("Failed to allocate mm_camera_buf_def_t %d", rc);
for (uint32_t i = 0; i < mNumBufs; i++) {
if (mStreamBufs->valid(i)) {
ops_tbl->unmap_ops(i, -1, CAM_MAPPING_BUF_TYPE_STREAM_BUF,
ops_tbl->userdata);
}
}
free(regFlags);
regFlags = NULL;
return INVALID_OPERATION;
}
memset(mBufDefs, 0, mNumBufs * sizeof(mm_camera_buf_def_t));
for (uint32_t i = 0; i < mNumBufs; i++) {
if (mStreamBufs->valid(i)) {
// Fill in each buffer definition
mStreamBufs->getBufDef(mFrameLenOffset, mBufDefs[i], i, mMapStreamBuffers);
}
}
rc = mStreamBufs->getRegFlags(regFlags);
if (rc < 0) {
LOGE("getRegFlags failed %d", rc);
for (uint32_t i = 0; i < mNumBufs; i++) {
if (mStreamBufs->valid(i)) {
ops_tbl->unmap_ops(i, -1, CAM_MAPPING_BUF_TYPE_STREAM_BUF,
ops_tbl->userdata);
}
}
free(mBufDefs);
mBufDefs = NULL;
free(regFlags);
regFlags = NULL;
return INVALID_OPERATION;
}
*num_bufs = mNumBufs;
*initial_reg_flag = regFlags;
*bufs = mBufDefs;
return NO_ERROR;
}
```
After mChannel->getStreamBufs() provides the memory backing mStreamBufs, the buffer definitions (mBufDefs) are then filled in from it via getBufDef():
```cpp
int32_t QCamera3Memory::getBufDef(const cam_frame_len_offset_t &offset,
mm_camera_buf_def_t &bufDef, uint32_t index, bool virtualAddr)
{
Mutex::Autolock lock(mLock);
if (!mBufferCount) {
LOGE("Memory not allocated");
return NO_INIT;
}
// fd (handle) of the underlying buffer
bufDef.fd = mMemInfo[index].fd;
bufDef.frame_len = mMemInfo[index].size;
bufDef.mem_info = (void *)this;
bufDef.buffer = virtualAddr ? getPtrLocked(index) : nullptr;
bufDef.planes_buf.num_planes = (int8_t)offset.num_planes;
bufDef.buf_idx = (uint8_t)index;
/* Plane 0 needs to be set separately. Set other planes in a loop */
bufDef.planes_buf.planes[0].length = offset.mp[0].len;
bufDef.planes_buf.planes[0].m.userptr = (long unsigned int)mMemInfo[index].fd;
bufDef.planes_buf.planes[0].data_offset = offset.mp[0].offset;
bufDef.planes_buf.planes[0].reserved[0] = 0;
for (int i = 1; i < bufDef.planes_buf.num_planes; i++) {
bufDef.planes_buf.planes[i].length = offset.mp[i].len;
bufDef.planes_buf.planes[i].m.userptr = (long unsigned int)mMemInfo[i].fd;
bufDef.planes_buf.planes[i].data_offset = offset.mp[i].offset;
bufDef.planes_buf.planes[i].reserved[0] =
bufDef.planes_buf.planes[i-1].reserved[0] +
bufDef.planes_buf.planes[i-1].length;
}
return NO_ERROR;
}
```
While bufDef is being filled in, getPtrLocked() is called:
```cpp
/*===========================================================================
* FUNCTION : getPtrLocked
*
* DESCRIPTION: Return buffer pointer. Please note 'mLock' must be acquired
* before calling this method.
*
* PARAMETERS :
* @index : index of the buffer
*
* RETURN : buffer ptr
*==========================================================================*/
void *QCamera3GrallocMemory::getPtrLocked(uint32_t index)
{
if (MM_CAMERA_MAX_NUM_FRAMES <= index) {
LOGE("index %d out of bound [0, %d)",
index, MM_CAMERA_MAX_NUM_FRAMES);
return NULL;
}
if (index < mStartIdx) {
LOGE("buffer index %d less than starting index %d",
index, mStartIdx);
return NULL;
}
if (0 == mMemInfo[index].handle) {
LOGE("Buffer at %d not registered", index);
return NULL;
}
if (mPtr[index] == nullptr) {
void *vaddr = NULL;
vaddr = mmap(NULL,
mMemInfo[index].size,
PROT_READ | PROT_WRITE,
MAP_SHARED,
mMemInfo[index].fd, 0);
if (vaddr == MAP_FAILED) {
LOGE("mmap failed for buffer index %d, size %d: %s(%d)",
index, mMemInfo[index].size, strerror(errno), errno);
return NULL;
} else {
mPtr[index] = vaddr;
}
}
return mPtr[index];
}
```
In this function, mmap() maps the memory behind mMemInfo[index].fd into this process; the mapped pointer is cached in mPtr[index] and returned.
Once QCamera3Memory::getBufDef() completes, the bufDef parameter is fully populated. Tracing back up, layer by layer, to mm_stream_init_bufs(), the call flow shows that my_obj->buf ends up fully filled in.
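One detail from getBufDef() worth spelling out is how the per-plane offsets accumulate: planes[i].reserved[0] = planes[i-1].reserved[0] + planes[i-1].length. For a hypothetical 1920x1080 NV21 frame (numbers chosen only for illustration, stride padding ignored) the arithmetic works out as follows:
```cpp
// Worked example of the plane-offset accumulation in getBufDef(), assuming a
// hypothetical 1920x1080 NV21 frame: a full-size Y plane followed by an
// interleaved VU plane of half the size, with no row padding.
#include <cstdio>

int main() {
    const unsigned width = 1920, height = 1080;
    const unsigned planeLen[2] = { width * height,        // Y plane
                                   width * height / 2 };  // interleaved VU plane

    unsigned offset = 0;                                  // plane 0 starts at 0
    for (int i = 0; i < 2; i++) {
        std::printf("plane %d: length=%u offset=%u\n", i, planeLen[i], offset);
        offset += planeLen[i];                            // reserved[0] accumulation
    }
    // Prints: plane 0: length=2073600 offset=0
    //         plane 1: length=1036800 offset=2073600
    return 0;
}
```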