Calculating the JPEG Stream Buffer Size in the Android 14 Camera Framework

Background

In Android 13 the camera framework added support for the AIDL camera HAL; camera features added in Android 13 or later are only available through the AIDL camera HAL interface.

On the application side, camera apps targeting API level 34 and later reach the HAL through the camera AIDL interface. While migrating the HAL from the HIDL camera interface to the AIDL camera interface, we found that with the AIDL HAL the JPEG buffer size is determined by the framework.

Let's first look at how the HIDL HAL and the AIDL HAL each obtain this gralloc buffer (the output buffer handed down by the framework).

Output buffer in the demo HIDL HAL

Taking Google's demo HAL as an example, this is how an output buffer is mapped into the camera HAL in the HIDL HAL:

```cpp
int V4L2Wrapper::DequeueRequest(std::shared_ptr<CaptureRequest>* request)
{
    ...
    
    v4l2_buffer buffer;
    memset(&buffer, 0, sizeof(buffer));
    buffer.type = format_->type();
    buffer.memory = V4L2_MEMORY_USERPTR;
    int res = IoctlLocked(VIDIOC_DQBUF, &buffer);
    if (res) {
        ...  // error handling omitted in this excerpt
    }
    ...

    arc::GrallocFrameBuffer output_frame(*stream_buffer->buffer,
            stream_buffer->stream->width, stream_buffer->stream->height,
            fourcc, buffer.length, stream_buffer->stream->usage);
    res = output_frame.Map();
    ...
}

// The GrallocFrameBuffer class performs the address mapping
GrallocFrameBuffer::GrallocFrameBuffer(buffer_handle_t buffer, uint32_t width,
        uint32_t height, uint32_t fourcc, uint32_t device_buffer_length,
        uint32_t stream_usage)
    : buffer_(buffer),
      is_mapped_(false),
      device_buffer_length_(device_buffer_length),  /* receives the buffer size passed in from the caller */
      stream_usage_(stream_usage) {
    ...
}

int GrallocFrameBuffer::Map() {
    ...
    switch (fourcc_) {
        ...
        case V4L2_PIX_FMT_JPEG:
            // Call the gralloc mapper here to map the buffer address
            ret = gralloc_module_->lock(gralloc_module_, buffer_, stream_usage_,
                        0, 0, device_buffer_length_, 1, &addr);
        break;
        ...
    }
    ...
}
```

Output buffer in the demo AIDL HAL

In the AIDL HAL the buffer size is settled at the configure-streams stage: the stream configuration sent down by the framework carries the corresponding buffer size, and the "lock" step is then performed at request time. (A sketch of how a HAL can capture this size at configure time follows the code below.)

```cpp
status_t EmulatedRequestProcessor::LockSensorBuffer(const EmulatedStream& stream,
                buffer_handle_t buffer, uint32_t width, uint32_t height,
                SensorBuffer* sensor_buffer /*out*/) {
    ...
    if ((isYUV_420_888) || (isP010)) {
        ...
    } else {
        uint32_t buffer_size = 0, stride = 0;
        auto ret = GetBufferSizeAndStride(stream, buffer, &buffer_size, &stride);
        if (ret != OK) {
            ALOGE("%s: Unsupported pixel format: 0x%x", __FUNCTION__,
                stream.override_format);
            return BAD_VALUE;
        }

        if (stream.override_format == HAL_PIXEL_FORMAT_BLOB) {
            // Call lock() here to map the buffer address
            sensor_buffer->plane.img.img =
                static_cast<uint8_t*>(importer_->lock(buffer, usage, buffer_size));
        } else {
            ...
        }
    }
    ...
}

status_t EmulatedRequestProcessor::GetBufferSizeAndStride(const EmulatedStream& stream,
    buffer_handle_t buffer, uint32_t* size/*out*/, uint32_t* stride/*out*/) {
    
    if (size == nullptr) {
        return BAD_VALUE;
    }

    switch (stream.override_format) {
        ...
        case HAL_PIXEL_FORMAT_BLOB:
            if (stream.override_data_space == HAL_DATASPACE_V0_JFIF) {
                *size = stream.buffer_size;   // buffer_size was set at the configure-streams stage
                *stride = *size;
            } else {
                return BAD_VALUE;
            }
        break;
        ...
    }

    return OK;
}
```
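
As noted above, the size arrives with the stream configuration: for BLOB streams the framework fills in the Stream::bufferSize field of the AIDL stream it sends down at configure-streams time. The snippet below is a minimal sketch, not the emulated HAL's actual code, of how a HAL could cache that value per stream for later use when locking BLOB buffers; the cache, helper name, and NDK-backend header paths are assumptions.

```cpp
// Minimal sketch (hypothetical helper): cache the framework-provided bufferSize
// for BLOB streams when the stream configuration arrives.
#include <cstdint>
#include <map>
#include <vector>

#include <aidl/android/hardware/camera/device/Stream.h>
#include <aidl/android/hardware/graphics/common/PixelFormat.h>

using aidl::android::hardware::camera::device::Stream;
using aidl::android::hardware::graphics::common::PixelFormat;

// streamId -> JPEG buffer size handed down by the framework (hypothetical cache).
static std::map<int32_t, uint32_t> gBlobBufferSizes;

void CacheBlobBufferSizes(const std::vector<Stream>& streams) {
    for (const Stream& s : streams) {
        if (s.format == PixelFormat::BLOB) {
            // For BLOB/JFIF streams the framework sets Stream::bufferSize to the
            // value computed by Camera3Device::getJpegBufferSize(); this is how
            // many bytes the HAL should lock at request time.
            gBlobBufferSizes[s.id] = static_cast<uint32_t>(s.bufferSize);
        }
    }
}
```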

JPEG bufferSize in the camera AIDL path

The JPEG buffer size for a request is determined by the framework before the request is sent down to the HAL, following an explicit rule:

jpegBufferSize = scaleFactor * (maxJpegBufferSize - kMinJpegBufferSize) + kMinJpegBufferSize

Where:

  • scaleFactor is a scaling factor (defined below)
  • maxJpegBufferSize is the largest JPEG data size the camera itself can produce
  • kMinJpegBufferSize is the minimum JPEG data size defined by the framework

kMinJpegBufferSize = 256 * 1024 + blobHeader is the minimum JPEG data size defined by the framework.

blobHeader is the JPEG blob header defined by the framework; it is a data structure declared in the camera interface and has a well-defined size.

```cpp
// Minimum JPEG blob size in the framework (full definitions shown below):
static const ssize_t kMinJpegBufferSize =
        256 * 1024 + sizeof(aidl::android::hardware::camera::device::CameraBlob);

// The blob header is the CameraBlob parcelable defined in the camera AIDL
// interface; see the "CameraBlob definition" section below.
```

scaleFactor: the scaling factor

scaleFactor = (jpegImage.width * jpegImage.height) / (chosenMaxJpegResolution.width * chosenMaxJpegResolution.height)

Where:

  • jpegImage is the still-capture image in the request
  • chosenMaxJpegResolution is the largest JPEG resolution the camera (HAL) can output. There are two cases, depending on whether max sensor pixel mode is supported.

chosenMaxJpegResolution: the largest JPEG resolution the camera can output

The largest JPEG resolution the camera (HAL) can output is determined by the hardware; the framework picks it from the stream configurations as follows (a simplified sketch of that lookup is shown after this list):

  1. If the camera supports max sensor pixel mode output and the requested JPEG resolution exceeds defaultMaxJpegResolution, chosenMaxJpegResolution is the largest JPEG resolution among the maximum-resolution stream configurations in the camera characteristics (tag: "availableStreamConfigurationsMaximumResolution").

  2. If the camera uses default sensor pixel mode, chosenMaxJpegResolution is the largest JPEG resolution among the default-mode stream configurations in the camera characteristics (tag: "availableStreamConfigurations").
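
Both resolutions come from SessionConfigurationUtils::getMaxJpegResolution(), which is called in the framework code shown later. The following is a simplified sketch of that lookup rather than the verbatim AOSP implementation: it scans the flattened (format, width, height, direction) stream-configuration entries for BLOB outputs and keeps the one with the largest area.

```cpp
// Simplified sketch of SessionConfigurationUtils::getMaxJpegResolution():
// scan the stream configurations for BLOB outputs and keep the largest area.
camera3::Size getMaxJpegResolutionSketch(const CameraMetadata& info,
                                         bool maxResolutionSensorPixelMode) {
    // Pick the default-mode tag or the maximum-resolution tag.
    const uint32_t tag = maxResolutionSensorPixelMode
            ? ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_MAXIMUM_RESOLUTION
            : ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS;

    int32_t maxWidth = 0, maxHeight = 0;
    camera_metadata_ro_entry entry = info.find(tag);
    // Entries are flattened as (format, width, height, direction) tuples.
    for (size_t i = 0; i + 3 < entry.count; i += 4) {
        const int32_t format    = entry.data.i32[i];
        const int32_t width     = entry.data.i32[i + 1];
        const int32_t height    = entry.data.i32[i + 2];
        const int32_t direction = entry.data.i32[i + 3];
        if (format == HAL_PIXEL_FORMAT_BLOB &&
                direction == ANDROID_SCALER_AVAILABLE_STREAM_CONFIGURATIONS_OUTPUT &&
                (int64_t)width * height > (int64_t)maxWidth * maxHeight) {
            maxWidth = width;
            maxHeight = height;
        }
    }
    return camera3::Size(maxWidth, maxHeight);
}
```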

maxJpegBufferSize: the largest JPEG data size the camera can output

How large the output JPEG ends up being depends not only on the sensor but also on the encoder: the same input image run through different encoders produces different output sizes.

The largest JPEG the camera (HAL) can output is tied to the maximum input resolution:

  1. If the camera outputs in max sensor pixel mode, maxJpegBufferSize is uhrMaxJpegBufferSize:

uhrMaxJpegBufferSize = (uhrMaxJpegResolution.width * uhrMaxJpegResolution.height) / (defaultMaxJpegResolution.width * defaultMaxJpegResolution.height) * defaultMaxJpegBufferSize;

Where:

  • uhrMaxJpegResolution is the largest JPEG resolution the camera can output in max sensor pixel mode, as described in the previous section.
  • defaultMaxJpegResolution is the largest JPEG resolution the camera can output in default sensor pixel mode, also described in the previous section.
  • defaultMaxJpegBufferSize is the largest JPEG size the camera can output in default sensor pixel mode; it indirectly reflects the encoder's compression capability.

  2. If the camera outputs in default sensor pixel mode, maxJpegBufferSize is simply defaultMaxJpegBufferSize.

defaultMaxJpegBufferSize is reported directly in the camera characteristics under the tag "android.jpeg.maxSize".
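
The max-sensor-pixel-mode case is handled by SessionConfigurationUtils::getUHRMaxJpegBufferSize(), which is called from the framework code below and implements exactly the scaling above. A sketch consistent with that formula (not necessarily the verbatim AOSP code):

```cpp
// Sketch of SessionConfigurationUtils::getUHRMaxJpegBufferSize(): scale the
// default-mode max JPEG size by the ratio of the two maximum resolutions.
ssize_t getUHRMaxJpegBufferSizeSketch(camera3::Size uhrMaxJpegResolution,
                                      camera3::Size defaultMaxJpegResolution,
                                      ssize_t defaultMaxJpegBufferSize) {
    const float ratio =
            static_cast<float>(uhrMaxJpegResolution.width * uhrMaxJpegResolution.height) /
            (defaultMaxJpegResolution.width * defaultMaxJpegResolution.height);
    return static_cast<ssize_t>(ratio * defaultMaxJpegBufferSize);
}
```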

JPEG bufferSize logic in the camera framework

CameraBlob definition

```aidl
//hardware/interfaces/camera/device/aidl/android/hardware/camera/device/CameraBlob.aidl
package android.hardware.camera.device;

import android.hardware.camera.device.CameraBlobId;

@VintfStability
parcelable CameraBlob {
    CameraBlobId blobId;
    
    int blobSizeBytes;
}
```
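
The reason this header has to be budgeted for is the BLOB transport convention: the buffer handed to the HAL is usually larger than the encoded JPEG, so the HAL writes the actual encoded size into a CameraBlob placed at the very end of the buffer, and the framework reads it back from there. A minimal sketch of that convention (hypothetical helper name; NDK AIDL backend headers assumed):

```cpp
// Minimal sketch: write the CameraBlob footer that tells the framework how many
// bytes of the BLOB buffer actually contain encoded JPEG data.
#include <cstring>

#include <aidl/android/hardware/camera/device/CameraBlob.h>
#include <aidl/android/hardware/camera/device/CameraBlobId.h>

using aidl::android::hardware::camera::device::CameraBlob;
using aidl::android::hardware::camera::device::CameraBlobId;

void WriteJpegBlobFooter(uint8_t* blob_buffer, size_t blob_buffer_size,
                         uint32_t encoded_jpeg_size) {
    CameraBlob footer;
    footer.blobId = CameraBlobId::JPEG;
    footer.blobSizeBytes = static_cast<int32_t>(encoded_jpeg_size);
    // The footer occupies the last sizeof(CameraBlob) bytes of the buffer that
    // the framework sized via getJpegBufferSize().
    std::memcpy(blob_buffer + blob_buffer_size - sizeof(CameraBlob),
                &footer, sizeof(footer));
}
```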

Minimum JPEG bufferSize

```cpp
static const ssize_t kMinJpegBufferSize =
        256 * 1024 + sizeof(aidl::android::hardware::camera::device::CameraBlob);
```

Calculating the JPEG bufferSize

```cpp
ssize_t Camera3Device::getJpegBufferSize(const CameraMetadata &info, uint32_t width, uint32_t height) const {

    // Get defaultMaxJpegResolution
    //Get max jpeg size (area-wise) for default sensor pixel mode
    camera3::Size maxDefaultJpegResolution = 
        SessionConfigurationUtils::getMaxJpegResolution(info,
            /*supportsUltraHighResolutionCapture*/false);

    // Get uhrMaxJpegResolution
    //Get max jpeg size (area-wise) for max resolution sensor pixel mode.
    camera3::Size uhrMaxJpegResolution = 
        SessionConfigurationUtils::getMaxJpegResolution(info,
            /*isUltraHighResolution*/true);

    if (maxDefaultJpegResolution.width == 0) {
        ALOGE("%s: Camera %s: Can't find valid available jpeg sizes in static metadata!",
                __FUNCTION__, mId.c_str());
        return BAD_VALUE;
    }

    // Decide which sensor pixel mode the request falls under
    bool useMaxSensorPixelModeThreshold = false;
    if (uhrMaxJpegResolution.width != 0 &&
       width * height > maxDefaultJpegResolution.width * maxDefaultJpegResolution.height) {
        //Use the ultra high res max jpeg size and max jpeg buffer size.
        useMaxSensorPixelModeThreshold = true;
    }

    // Get defaultMaxJpegBufferSize (android.jpeg.maxSize)
    //Get max jpeg buffer size
    ssize_t maxJpegBufferSize = 0;
    camera_metadata_ro_entry jpegBufMaxSize = info.find(ANDROID_JPEG_MAX_SIZE);
    if (jpegBufMaxSize.count == 0) {
        ALOGE("%s: Camera %s: Can't find maximum JPEG size in static metadata!", 
            __FUNCTION__, mId.c_str());
        return BAD_VALUE;
    }
    maxJpegBufferSize = jpegBufMaxSize.data.i32[0];

    // Choose the max JPEG resolution and the max JPEG buffer size
    camera3::Size chosenMaxJpegResolution = maxDefaultJpegResolution;
    if (useMaxSensorPixelModeThreshold) {
        maxJpegBufferSize = SessionConfigurationUtils::getUHRMaxJpegBufferSize(
                uhrMaxJpegResolution, maxDefaultJpegResolution, maxJpegBufferSize);
        chosenMaxJpegResolution = uhrMaxJpegResolution;
    }
    assert(kMinJpegBufferSize < maxJpegBufferSize);

    // Compute the scale factor
    //Calculate final jpeg buffer size for the given resolution
    float scaleFactor = ((float) (width * height)) /
            (chosenMaxJpegResolution.width * chosenMaxJpegResolution.height);
    // Compute the JPEG bufferSize for this request
    ssize_t jpegBufferSize = scaleFactor * (maxJpegBufferSize - kMinJpegBufferSize) +
            kMinJpegBufferSize;
    // Clamp the result so it never exceeds maxJpegBufferSize
    if (jpegBufferSize > maxJpegBufferSize) {
        ALOGI("%s: jpeg buffer size calculated is > maxJpeg bufferSize(%zd), clamping",
            __FUNCTION__, maxJpegBufferSize);
        jpegBufferSize = maxJpegBufferSize;
    }
    return jpegBufferSize;
}
```

Camera characteristics used by the demo AIDL HAL

In the demo AIDL HAL the camera characteristics are loaded from a .json file. Taking the back camera as an example, the relevant entries are:

```javascript
//hardware/google/camera/devices/EmulatedCamera/hwl/configs/emu_camera_back.json
"android.scaler.availableStreamConfigurations" : [
    ...
    "33",
    "1856",
    "1392",
    "OUTPUT",
    "33",
    "1280",
    "720",
    "OUTPUT",
    ...
],

"android.jpeg.maxSize" : [
    "300000"
],
...
```
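
Putting the pieces together with the demo back camera's values: the largest BLOB (format 33) size in the excerpt above is 1856x1392, and android.jpeg.maxSize is 300000. For a default-mode 1280x720 JPEG request the framework's calculation then reduces to the small worked example below (a standalone sketch; sizeof(CameraBlob) is assumed to be 8 bytes here, so kMinJpegBufferSize comes out to 262152).

```cpp
// Worked example of the getJpegBufferSize() arithmetic using the demo back
// camera's characteristics (1856x1392 max JPEG, android.jpeg.maxSize = 300000).
#include <algorithm>
#include <cstdio>

int main() {
    const long kMinJpegBufferSize = 256 * 1024 + 8;  // blob header assumed 8 bytes -> 262152
    const long maxJpegBufferSize  = 300000;          // android.jpeg.maxSize
    const long maxW = 1856, maxH = 1392;             // chosenMaxJpegResolution (default mode)
    const long reqW = 1280, reqH = 720;              // requested JPEG size

    const float scaleFactor =
            static_cast<float>(reqW * reqH) / (maxW * maxH);          // ~0.357
    long jpegBufferSize = static_cast<long>(
            scaleFactor * (maxJpegBufferSize - kMinJpegBufferSize) + kMinJpegBufferSize);
    jpegBufferSize = std::min(jpegBufferSize, maxJpegBufferSize);      // clamp to the max

    // Prints roughly: scaleFactor=0.357 jpegBufferSize=275653
    std::printf("scaleFactor=%.3f jpegBufferSize=%ld\n", scaleFactor, jpegBufferSize);
    return 0;
}
```

The result stays well below maxJpegBufferSize, so the clamp does not kick in; the HAL receives a ~276 KB BLOB buffer for this 1280x720 request.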