The motivation for this article is that I could not find much material on this topic among currently available open-source content, and I am still learning this area myself, so the goal here is to explore and consolidate these techniques.
First, let us introduce some of the ideas and methods behind the image-stitching process:
The core idea of stitching is to create a single seamless, panoramic image by merging multiple overlapping image regions.
1. Image overlap and stitch regions
- Overlap regions: images captured by different cameras usually share a certain amount of overlap, and this overlap is what gets blended during stitching. The stitching algorithm must identify and process these overlap regions so they can be combined seamlessly into one complete image.
- Partitioning the stitch regions: the image is divided into multiple stitch regions, each consisting of an overlapping part and a non-overlapping part. Every overlap region must be processed to keep the final stitched result smooth and natural.
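As a minimal illustration of the overlap idea (the struct and function names here are hypothetical, not from any SDK), the overlap width between two images placed side by side on a panorama canvas can be computed from their positions:

```cpp
#include <algorithm>

// Hypothetical descriptor for one camera image placed on a panorama canvas.
struct ImagePlacement {
    int x;      // left edge on the panorama canvas (pixels)
    int width;  // image width (pixels)
};

// Width of the overlap between two horizontally adjacent images;
// 0 means the images do not overlap at all.
static int overlap_width(const ImagePlacement &a, const ImagePlacement &b) {
    int left  = std::max(a.x, b.x);
    int right = std::min(a.x + a.width, b.x + b.width);
    return std::max(0, right - left);
}
```

For example, two 1920-pixel-wide images whose left edges sit at 0 and 1600 overlap by 320 pixels, and that 320-pixel strip is what the blending stage operates on.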
2. Image blending
- Blending algorithms: to join the overlapping regions smoothly, blending algorithms are typically used. They operate on features such as color, brightness, and texture in the overlap region to reduce seams and visual discontinuities.
- Blend width and height: when processing the images, the blend width and height of each stitch region must be computed. These parameters determine how much the images overlap and how the smoothing inside the overlap region is performed.
3. Generating and applying LUTs
- 2D LUT (lookup table): in image stitching, a 2D LUT is a tool for adjusting image color and brightness. The stitching algorithm can generate these LUTs and apply them in the image-processing module to keep color and brightness consistent across the stitched result.
- LUT updates: the generated LUTs must be pushed to the video processing engine (VPE) or other related modules, so that the adjustments take effect when the image is displayed or processed further.
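To make the 2D LUT idea concrete, here is a sketch of applying a per-block gain LUT to a luma plane. The Q8 fixed-point format and the nearest-entry lookup are assumptions for illustration; real VPE hardware typically interpolates between LUT entries:

```cpp
#include <cstdint>

// Sketch of applying a 2D gain LUT to a luma plane: the w x h image is
// divided into lut_w x lut_h blocks, and each block has one gain entry
// in Q8 fixed point (256 == 1.0x gain). Nearest-entry lookup for brevity.
static void apply_lut2d_q8(uint8_t *y, int w, int h,
                           const uint16_t *lut, int lut_w, int lut_h) {
    for (int row = 0; row < h; row++) {
        int ly = row * lut_h / h;               // LUT row for this pixel row
        for (int col = 0; col < w; col++) {
            int lx = col * lut_w / w;           // LUT column for this pixel
            unsigned v = (y[row * w + col] * lut[ly * lut_w + lx]) >> 8;
            y[row * w + col] = (uint8_t)(v > 255 ? 255 : v);  // clamp
        }
    }
}
```

A LUT like this lets the stitcher brighten or darken each region of each camera independently, which is how brightness mismatches at the seam are evened out.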
4. Stitch processing steps
The concrete processing steps are:
- Parse stitch parameters: extract stitch parameters, such as the overlap region's width and height, from the input image or video frame.
- Compute image offsets: compute the virtual memory address of each image so that every stitch region can be located correctly in memory.
- Feed the stitch input: pass the computed stitch information to the smart-stitch module and run the stitch operation. This typically means calling dedicated stitch-processing functions with the image data and stitch parameters.
- Fetch the stitch result: retrieve the processed result, such as the LUT, from the stitch module and apply it in the image-processing module.
- Update the processing modules: write the retrieved LUT into the video processing engine (VPE) or other processing modules to finalize the stitched output.
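The "parse stitch parameters" step usually means unpacking bit-fields from the frame's side-channel data. As a small, self-contained sketch (mirroring how the sample function later in this article reads `p_video_frame->reserved[]`, though the exact layout is SDK-specific):

```cpp
#include <cstdint>

// Each 32-bit reserved word packs two 16-bit blend widths; idx selects
// the low (0) or high (1) half-word.
static uint32_t unpack_blend_width(uint32_t word, unsigned idx) {
    return (word >> (16 * idx)) & 0xFFFFu;
}

// The frame number is assumed to sit in bits 16..19 of its reserved word.
static uint32_t unpack_frame_num(uint32_t word) {
    return (word >> 16) & 0xFu;
}
```

Keeping the unpacking in small named helpers like these makes the bit layout auditable in one place instead of being repeated inline.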
Handling repeated frames
- Repeated frames: in some cases a camera may produce repeated or near-identical frames, and these can also be handled by the stitching algorithm. The goal when handling repeated frames is to preserve the continuity and consistency of the stitched image and avoid visible discontinuities.
The blended-stitching approach merges images from different viewpoints or different cameras into one seamless panoramic image, which extends the field of view or synthesizes a larger scene.
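One simple way to detect a repeated frame (an assumption for illustration; the SDK may use timestamps or sequence numbers instead) is to compare the mean absolute difference of the luma planes against a threshold:

```cpp
#include <cstdint>
#include <cstddef>
#include <cstdlib>

// Sketch of duplicate-frame detection: two frames are treated as
// duplicates when the mean absolute difference of their luma planes
// falls below a threshold. The threshold value is an assumption, not
// taken from any particular SDK.
static bool is_duplicate_frame(const uint8_t *a, const uint8_t *b,
                               size_t n, unsigned threshold) {
    uint64_t sum = 0;
    for (size_t i = 0; i < n; i++)
        sum += (uint64_t)std::abs((int)a[i] - (int)b[i]);
    return (sum / n) < threshold;
}
```

A detected duplicate can then reuse the previous frame's stitch result instead of recomputing the LUTs.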
To sum up, the programming flow is as follows:
Initialization: initialize the stitch-info structure and state data.
Parse parameters: extract stitch-related parameters, such as the current frame number and the overlap region's width and height, from the video frame's reserved data.
Compute addresses: compute the virtual addresses of the Y and UV image memory for each stitch region.
Run the stitch: pass the stitch information to the smart-stitch processing module and fetch the result.
Update the LUT: fetch the 2D LUT result and update the DRE fusion map.
Flush caches: flush the caches and unmap the virtual memory.
Set VPE parameters: set the VPE's 2D LUT parameters and return the result of the operation.
A reference implementation of the flow above (`vsp_overlap_num`, `vsp_blend_frame_num`, the `g_lut2d_*` buffers, `My_stitchers`, and `lut2d[]` are globals defined elsewhere in the project):

```cpp
#if SMART_STITCH
static HD_RESULT smart_stitch_trig(HD_VIDEO_FRAME *p_video_frame, VIDEO_RECORD *p_stream)
{
    SMART_STITCH_INFO smart_stitch_info = {0};
    UINT32 overlap_idx, current_frame_num;
    UINT32 blend_width[MAX_VSP_BLEND_FRAME_NUM]  = {0};
    UINT32 blend_height[MAX_VSP_BLEND_FRAME_NUM] = {0};
    ULONG  y_addr_l[MAX_VSP_BLEND_FRAME_NUM]  = {0};
    ULONG  y_addr_r[MAX_VSP_BLEND_FRAME_NUM]  = {0};
    ULONG  uv_addr_l[MAX_VSP_BLEND_FRAME_NUM] = {0};
    ULONG  uv_addr_r[MAX_VSP_BLEND_FRAME_NUM] = {0};

    // 1. Parse stitch parameters from the frame's reserved words:
    //    reserved[6] bits 16..19 carry the frame number; reserved[2]
    //    packs one 16-bit blend width per overlap region.
    current_frame_num = (p_video_frame->reserved[6] >> 16) & 0xF;
    for (overlap_idx = 0; overlap_idx < MAX_VSP_BLEND_FRAME_NUM; overlap_idx++) {
        blend_width[overlap_idx]  = (p_video_frame->reserved[2] >> (16 * overlap_idx)) & 0xFFFF;
        blend_height[overlap_idx] = p_video_frame->ph[0];
    }

    // 2. Line offsets: each overlap contributes a left and a right strip,
    //    hence the << 1.
    UINT32 y_lofs  = 0;
    UINT32 uv_lofs = 0;
    for (overlap_idx = 0; overlap_idx < vsp_overlap_num; overlap_idx++) {
        y_lofs  += blend_width[overlap_idx] << 1;
        uv_lofs += blend_width[overlap_idx] << 1;
    }

    // 3. Map the frame buffer into virtual memory.
    UINTPTR phy_addr_frame = hd_common_mem_blk2pa(p_video_frame->blk);
    if (phy_addr_frame == 0) {
        return HD_ERR_SYS;
    }
    ULONG vir_addr_frame = (ULONG)hd_common_mem_mmap(HD_COMMON_MEM_MEM_TYPE_CACHE, phy_addr_frame,
        DBGINFO_BUFSIZE() + VDO_YUV_BUFSIZE(y_lofs, blend_height[0], HD_VIDEO_PXLFMT_YUV420));
    if (vir_addr_frame == 0) {
        return HD_ERR_SYS;
    }
    ULONG y_va  = PHY2VIRT_YUV(p_video_frame->phy_addr[0]);
    ULONG uv_va = PHY2VIRT_YUV(p_video_frame->phy_addr[1]);

    // 4. Virtual addresses of the left/right strips of each overlap region.
    UINT32 anchorX = (vsp_overlap_num == vsp_blend_frame_num) ? blend_width[vsp_overlap_num - 1] : 0;
    for (overlap_idx = 0; overlap_idx < vsp_overlap_num; overlap_idx++) {
        y_addr_l[overlap_idx]  = y_va + anchorX;
        y_addr_r[overlap_idx]  = y_va + anchorX + blend_width[overlap_idx];
        uv_addr_l[overlap_idx] = uv_va + anchorX;
        uv_addr_r[overlap_idx] = uv_va + anchorX + blend_width[overlap_idx];
        anchorX += blend_width[overlap_idx] << 1;
    }

    // 5. Feed each overlap region to the smart-stitch module.
    for (overlap_idx = 0; overlap_idx < vsp_overlap_num; overlap_idx++) {
        smart_stitch_info.frame_num    = current_frame_num;
        smart_stitch_info.y_lofs_l     = y_lofs;
        smart_stitch_info.y_lofs_r     = y_lofs;
        smart_stitch_info.uv_lofs_l    = uv_lofs;
        smart_stitch_info.uv_lofs_r    = uv_lofs;
        smart_stitch_info.blend_height = blend_height[overlap_idx];
        smart_stitch_info.blend_width  = blend_width[overlap_idx];
        smart_stitch_info.y_addr_l     = y_addr_l[overlap_idx];
        smart_stitch_info.y_addr_r     = y_addr_r[overlap_idx];
        smart_stitch_info.uv_addr_l    = uv_addr_l[overlap_idx];
        smart_stitch_info.uv_addr_r    = uv_addr_r[overlap_idx];
        if (smart_stitch2_process_input((void *)My_stitchers[overlap_idx], &smart_stitch_info) != HD_OK) {
            return HD_ERR_SYS;
        }
    }

    // 6. Fetch the 2D LUT results into the next ping-pong buffer slot.
    g_lut2d_buf_idx = (g_lut2d_buf_idx + 1) % LUT2D_BUFNUM;
    for (overlap_idx = 0; overlap_idx < vsp_overlap_num; overlap_idx++) {
        if (smart_stitch2_process_output((void *)My_stitchers[overlap_idx],
                g_lut2d_lbuf_pool[overlap_idx][g_lut2d_buf_idx],
                g_lut2d_rbuf_pool[overlap_idx][g_lut2d_buf_idx]) != HD_OK) {
            return HD_ERR_SYS;
        }
        g_lut2d_l_addr[overlap_idx] = g_lut2d_lbuf_pool[overlap_idx][g_lut2d_buf_idx];
        g_lut2d_r_addr[overlap_idx] = g_lut2d_rbuf_pool[overlap_idx][g_lut2d_buf_idx];
    }
    hd_common_mem_flush_cache((void *)g_lut2d_va, g_smart_stitch_LUT_mem_size);

    // 7. Flush caches for the DRE fusion maps and unmap the frame.
    for (overlap_idx = 0; overlap_idx < vsp_overlap_num; overlap_idx++) {
        UINT32 blk_size = VDO_YUV_BUFSIZE(blend_width[overlap_idx], blend_height[overlap_idx], HD_VIDEO_PXLFMT_Y8);
        hd_common_mem_flush_cache((void *)(p_stream->fusion_va[overlap_idx]), blk_size);
    }
    if (hd_common_mem_munmap((void *)vir_addr_frame,
            DBGINFO_BUFSIZE() + VDO_YUV_BUFSIZE(y_lofs, blend_height[0], HD_VIDEO_PXLFMT_YUV420)) != HD_OK) {
        return HD_ERR_SYS;
    }

    // 8. Push the 2D LUT parameters to the VPE.
    for (overlap_idx = 0; overlap_idx < vsp_blend_frame_num; overlap_idx++) {
        if (vendor_vpe_set_cmd(VPET_ITEM_2DLUT_PARAM, &lut2d[overlap_idx]) != HD_OK) {
            return HD_ERR_SYS;
        }
    }
    return HD_OK;
}
#endif
```