A Brief Analysis of the Hardware Drawing Flow of Views in Android

Preface

In "A Brief Analysis of the Surface in the Android View Drawing Process" we saw that the graphics data produced by rendering a View has to travel across processes before it can be composited and put on screen. On top of this cross-process transfer, the producer (usually the app process) generates graphics data that the SurfaceFlinger process consumes. After several Android releases, the way that graphics data is produced settled into two implementations: software rendering and hardware rendering. Hardware acceleration has been supported since Android 3.0 and enabled by default since Android 4.0. "A Brief Analysis of the Software Drawing Flow of Views in Android" analyzed the software drawing flow; below we analyze the View drawing flow from the perspective of the hardware rendering mechanism.

Hardware Drawing

To improve the rendering capability of the Android system, hardware acceleration has been supported since Android 3.0 and enabled by default since Android 4.0; Android 4.1 introduced the VSYNC and triple-buffering mechanisms, and Android 4.2 added an overdraw monitoring tool. To push rendering performance further, Android 5.0 introduced RenderNode and RenderThread, to reduce redundant drawing and to process GL commands asynchronously respectively, and Android 7.0 added the Vulkan hardware rendering backend.

For the hardware rendering mechanism, ThreadedRenderer plays a key role in the drawing flow, so we start with how the ThreadedRenderer is created. As analyzed in "A Brief Analysis of How Choreographer Works in Android", before the VSYNC signal is scheduled, ViewRootImpl#setView associates the DecorView with the ViewRootImpl and also creates the ThreadedRenderer instance used by the subsequent hardware rendering flow.

Creating the ThreadedRenderer

java
public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
	public void setView(View view, WindowManager.LayoutParams attrs, View panelParentView, int userId) {
		synchronized (this) {
            if (mView == null) {
                mView = view;
				// ...
	
				// mSurfaceHolder == null
                if (mSurfaceHolder == null) {
                    // While this is supposed to enable only, it can effectively disable
                    // the acceleration too.
                    enableHardwareAcceleration(attrs);
                    final boolean useMTRenderer = MT_RENDERER_AVAILABLE && mAttachInfo.mThreadedRenderer != null;
                    if (mUseMTRenderer != useMTRenderer) {
                        // Shouldn't be resizing, as it's done only in window setup,
                        // but end just in case.
                        endDragResizing();
                        mUseMTRenderer = useMTRenderer;
                    }
                }
				// ...		
			}
		}
	}

	private void enableHardwareAcceleration(WindowManager.LayoutParams attrs) {
        mAttachInfo.mHardwareAccelerated = false;
        mAttachInfo.mHardwareAccelerationRequested = false;

        // Don't enable hardware acceleration when the application is in compatibility mode
        if (mTranslator != null) return;

        // Try to enable hardware acceleration if requested
        final boolean hardwareAccelerated = (attrs.flags & WindowManager.LayoutParams.FLAG_HARDWARE_ACCELERATED) != 0;

        if (hardwareAccelerated) {
            final boolean forceHwAccelerated = (attrs.privateFlags & WindowManager.LayoutParams.PRIVATE_FLAG_FORCE_HARDWARE_ACCELERATED) != 0;
			// The hardware-accelerated case
            if (ThreadedRenderer.sRendererEnabled || forceHwAccelerated) {
                if (mAttachInfo.mThreadedRenderer != null) {
                    mAttachInfo.mThreadedRenderer.destroy();
                }

                final Rect insets = attrs.surfaceInsets;
                final boolean hasSurfaceInsets = insets.left != 0 || insets.right != 0 || insets.top != 0 || insets.bottom != 0;
                final boolean translucent = attrs.format != PixelFormat.OPAQUE || hasSurfaceInsets;
                // 1. Create the ThreadedRenderer object and assign it to mAttachInfo.mThreadedRenderer
                mAttachInfo.mThreadedRenderer = ThreadedRenderer.create(mContext, translucent, attrs.getTitle().toString());
                updateColorModeIfNeeded(attrs.getColorMode());
                updateForceDarkMode();
                if (mAttachInfo.mThreadedRenderer != null) {
                	// 2. Update the hardware-acceleration-related fields
                    mAttachInfo.mHardwareAccelerated = mAttachInfo.mHardwareAccelerationRequested = true;
                    if (mHardwareRendererObserver != null) {
                        mAttachInfo.mThreadedRenderer.addObserver(mHardwareRendererObserver);
                    }
                    // 3. Hand mSurfaceControl and mBlastBufferQueue to mThreadedRenderer for the subsequent drawing and rendering
                    mAttachInfo.mThreadedRenderer.setSurfaceControl(mSurfaceControl);
                    mAttachInfo.mThreadedRenderer.setBlastBufferQueue(mBlastBufferQueue);
                }

				// ...
            }
        }
    }
	// ...
}

As the source shows, before the VSYNC signal is scheduled a ThreadedRenderer object is created and stored in mAttachInfo, the hardware-acceleration-related fields in mAttachInfo are updated, and finally mSurfaceControl and mBlastBufferQueue are handed to mThreadedRenderer for the subsequent drawing and rendering.

Next, let's look at what happens while the ThreadedRenderer object is being created.

java
/**
 * ThreadedRenderer moves the rendering work onto a render thread. The UI thread may block on the render thread, but the render thread must never block the UI thread.
 * ThreadedRenderer creates a RenderProxy instance; the RenderProxy creates and manages the CanvasContext on the render thread, and the CanvasContext's lifecycle is fully owned by the RenderProxy.
 */
public final class ThreadedRenderer extends HardwareRenderer {
	
    /**
     * Creates a ThreadedRenderer instance backed by OpenGL.
     *
     * @param translucent True if the surface is translucent, false otherwise
     *
     * @return A threaded renderer backed by OpenGL.
     */
    public static ThreadedRenderer create(Context context, boolean translucent, String name) {
        return new ThreadedRenderer(context, translucent, name);
    }

	ThreadedRenderer(Context context, boolean translucent, String name) {
		// Calls the HardwareRenderer constructor
        super();
        setName(name);
        setOpaque(!translucent);

        final TypedArray a = context.obtainStyledAttributes(null, R.styleable.Lighting, 0, 0);
        mLightY = a.getDimension(R.styleable.Lighting_lightY, 0);
        mLightZ = a.getDimension(R.styleable.Lighting_lightZ, 0);
        mLightRadius = a.getDimension(R.styleable.Lighting_lightRadius, 0);
        float ambientShadowAlpha = a.getFloat(R.styleable.Lighting_ambientShadowAlpha, 0);
        float spotShadowAlpha = a.getFloat(R.styleable.Lighting_spotShadowAlpha, 0);
        a.recycle();
        setLightSourceAlpha(ambientShadowAlpha, spotShadowAlpha);
    }
}

As you can see, the key part of constructing a ThreadedRenderer is the constructor of its parent class, HardwareRenderer.

java
/**
 * Creates a hardware-accelerated renderer that renders RenderNodes onto a Surface.
 * All HardwareRenderer instances share a single render thread, which holds the GPU context and resources needed for GPU-accelerated rendering.
 * The first HardwareRenderer created also pays the cost of creating the GPU context, but every instance created after that is cheap.
 * The recommended usage is one HardwareRenderer per Surface that is in active use.
 * For example, when an Activity shows a Dialog the system internally uses two HardwareRenderer instances, and both may be drawing at the same time.
 */
public class HardwareRenderer {
	protected RenderNode mRootNode; // Root node
	private final long mNativeProxy; // Handle of the native render proxy object
	
    public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create the RenderNode on the native side and wrap the returned handle in a Java-level RenderNode (the root node)
        mRootNode = RenderNode.adopt(nCreateRootRenderNode());
        mRootNode.setClipToBounds(false);
        // 2. Call nCreateProxy to create a render proxy object on the native side, returning its handle
        mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
        if (mNativeProxy == 0) {
            throw new OutOfMemoryError("Unable to create hardware renderer");
        }
        Cleaner.create(this, new DestroyContextRunnable(mNativeProxy));
        // 3. Initialize the process-wide state (ProcessInitializer) with the native render proxy
        ProcessInitializer.sInstance.init(mNativeProxy);
    }

    private static native long nCreateRootRenderNode();

    public static RenderNode adopt(long nativePtr) {
        return new RenderNode(nativePtr);
    }

	private static native long nCreateProxy(boolean translucent, long rootRenderNode);

	private static class ProcessInitializer {
		
		synchronized void init(long renderProxy) {
            if (mInitialized) return;
            mInitialized = true;
			// Initialize the render thread's scheduling info
            initSched(renderProxy);
            // Request a buffer and pass its fd down to the native layer
            initGraphicsStats();
        }
	}
}

From the source, creating the ThreadedRenderer object mainly involves:

  • Creating the root render node
  • Creating the render proxy object

Creating the native root render node

java
public class HardwareRenderer {
	protected RenderNode mRootNode; // Root node
	// ...
	
	public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create the RenderNode on the native side and wrap the returned handle in a Java-level RenderNode (the root node)
        mRootNode = RenderNode.adopt(nCreateRootRenderNode());
        mRootNode.setClipToBounds(false);
        // 2. Call nCreateProxy to create a render proxy object on the native side, returning its handle
        // ...
        // 3. Initialize the process-wide state (ProcessInitializer) with the native render proxy
        // ...
    }
    
	private static native long nCreateRootRenderNode();
}

public final class RenderNode {
    // Handle of the native RenderNode
    public final long mNativeRenderNode;

    private RenderNode(long nativePtr) {
        mNativeRenderNode = nativePtr;
        NoImagePreloadHolder.sRegistry.registerNativeAllocation(this, mNativeRenderNode);
        mAnimationHost = null;
    }
	// ...
}

As you can see, the Java-level RenderNode is really just a shell; the actual implementation lives in the native layer. That is, nCreateRootRenderNode calls into the native layer to create the RenderNode.
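This shell-over-native-handle pattern can be illustrated with a small runnable model (a simplified plain-Java sketch; `Node` and `NATIVE_HEAP` are stand-ins invented here, while the real handle is a native pointer managed through JNI and a NativeAllocationRegistry):

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the Java-shell / native-handle pattern used by RenderNode:
// the Java object only stores an opaque long handle; the real state lives on
// the "native" side and is looked up through that handle.
public class HandleDemo {
    // Stand-in for the native heap: maps handle values to native-side state.
    static final Map<Long, StringBuilder> NATIVE_HEAP = new HashMap<>();
    static long nextHandle = 1;

    // Models nCreateRootRenderNode(): allocate native state, return a handle.
    static long nCreateNode() {
        long handle = nextHandle++;
        NATIVE_HEAP.put(handle, new StringBuilder());
        return handle;
    }

    // Models the Java RenderNode wrapper: a shell holding mNativeRenderNode.
    static class Node {
        final long mNativeNode;
        private Node(long nativePtr) { mNativeNode = nativePtr; }
        static Node adopt(long nativePtr) { return new Node(nativePtr); }
        // Every operation delegates to the native-side state via the handle.
        void record(String op) { NATIVE_HEAP.get(mNativeNode).append(op).append(';'); }
        String dump() { return NATIVE_HEAP.get(mNativeNode).toString(); }
    }

    public static void main(String[] args) {
        Node root = Node.adopt(nCreateNode());
        root.record("drawRect");
        root.record("drawText");
        System.out.println(root.dump()); // drawRect;drawText;
    }
}
```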

cpp
// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static jlong android_view_ThreadedRenderer_createRootRenderNode(JNIEnv* env, jobject clazz) {
    RootRenderNode* node = new RootRenderNode(std::make_unique<JvmErrorReporter>(env));
    node->incStrong(0);
    node->setName("RootRenderNode");
    return reinterpret_cast<jlong>(node);
}

// frameworks/base/libs/hwui/RootRenderNode.h
class RootRenderNode : public RenderNode {
public:
    explicit RootRenderNode(std::unique_ptr<ErrorHandler> errorHandler)
            : RenderNode(), mErrorHandler(std::move(errorHandler)) {}
	// ...
}

// frameworks/base/libs/hwui/RenderNode.cpp
RenderNode::RenderNode()
        : mUniqueId(generateId())
        , mDirtyPropertyFields(0)
        , mNeedsDisplayListSync(false)
        , mDisplayList(nullptr)
        , mStagingDisplayList(nullptr)
        , mAnimatorManager(*this)
        , mParentCount(0) {}

In the end a native RenderNode object is created, and its handle is returned to the Java-level RenderNode object, which stores it in its mNativeRenderNode field.

Creating the render proxy object

The render proxy is responsible for handling rendering requests coming from the Java layer. Let's look at how it is created.

java
public class HardwareRenderer {
	protected RenderNode mRootNode; // Root node
	private final long mNativeProxy; // Handle of the native render proxy object
	
    public HardwareRenderer() {
    	// Initialize the Context
        ProcessInitializer.sInstance.initUsingContext();
        // 1. Create the RenderNode on the native side and wrap the returned handle in a Java-level RenderNode (the root node)
        // ...
        // 2. Call nCreateProxy to create a render proxy object on the native side, returning its handle
        mNativeProxy = nCreateProxy(!mOpaque, mRootNode.mNativeRenderNode);
        // ...
        // 3. Initialize the process-wide state (ProcessInitializer) with the native render proxy
        ProcessInitializer.sInstance.init(mNativeProxy);
    }

	private static native long nCreateProxy(boolean translucent, long rootRenderNode);
}

As you can see, the render proxy object is created in the native layer, and the native RenderNode is used while creating it.

cpp
// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static jlong android_view_ThreadedRenderer_createProxy(JNIEnv* env, jobject clazz, jboolean translucent, jlong rootRenderNodePtr) {
	// Fetch the RootRenderNode object created earlier
    RootRenderNode* rootRenderNode = reinterpret_cast<RootRenderNode*>(rootRenderNodePtr);
    // Create a ContextFactoryImpl object; it holds the rootRenderNode
    ContextFactoryImpl factory(rootRenderNode);
    // Create the RenderProxy object
    RenderProxy* proxy = new RenderProxy(translucent, rootRenderNode, &factory);
    return (jlong) proxy;
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
RenderProxy::RenderProxy(bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory) : mRenderThread(RenderThread::getInstance()), mContext(nullptr) {
#ifdef __ANDROID__
    pid_t uiThreadId = pthread_gettid_np(pthread_self());
#else
    pid_t uiThreadId = 0;
#endif
    pid_t renderThreadId = getRenderThreadTid();
    mContext = mRenderThread.queue().runSync([=, this]() -> CanvasContext* {
        CanvasContext* context = CanvasContext::create(mRenderThread, translucent, rootRenderNode, contextFactory, uiThreadId, renderThreadId);
        if (context != nullptr) {
            mRenderThread.queue().post([=] { context->startHintSession(); });
        }
        return context;
    });
    mDrawFrameTask.setContext(&mRenderThread, mContext, rootRenderNode);
}

Creating the RenderProxy object obtains the RenderThread via RenderThread::getInstance(), so there is only one render thread, RenderThread, per app process. It then submits a task to the RenderThread to create the CanvasContext object, and finally calls setContext on the DrawFrameTask object to hand it the RenderThread, the CanvasContext, and the RootRenderNode.
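The runSync pattern used here (the UI thread blocks until a task finishes on the shared render thread) maps naturally onto a single-threaded executor. A minimal sketch in plain Java (the class names mirror hwui's but are stand-ins, not the real WorkQueue API):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Simplified model of RenderProxy's construction: the UI thread submits a
// creation task to a single shared "render thread" and blocks until it
// returns the created context (the runSync pattern).
public class RenderQueueDemo {
    // One shared worker thread per process, like RenderThread::getInstance().
    static final ExecutorService RENDER_THREAD =
            Executors.newSingleThreadExecutor(r -> {
                Thread t = new Thread(r, "RenderThread");
                t.setDaemon(true); // let the process exit even if the queue is idle
                return t;
            });

    static class CanvasContext {
        // Records which thread actually constructed the context.
        final String createdOn = Thread.currentThread().getName();
    }

    // Models queue().runSync: submit the task and wait for its result.
    static <T> T runSync(Callable<T> task) {
        try {
            return RENDER_THREAD.submit(task).get();
        } catch (InterruptedException | ExecutionException e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        // The "UI thread" blocks here until the render thread creates the context.
        CanvasContext ctx = runSync(CanvasContext::new);
        System.out.println(ctx.createdOn); // RenderThread
    }
}
```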

Below is the factory method that creates the CanvasContext. It builds the concrete CanvasContext according to the render pipeline type configured for the system, with each pipeline type corresponding to a different pipeline implementation. As the code shows, both the OpenGL and Vulkan hardware rendering backends are supported, plus a Skia CPU (software) pipeline on non-Android builds.

cpp
// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
CanvasContext* CanvasContext::create(RenderThread& thread, bool translucent, RenderNode* rootRenderNode, IContextFactory* contextFactory, pid_t uiThreadId, pid_t renderThreadId) {
    auto renderType = Properties::getRenderPipelineType();

    switch (renderType) {
        case RenderPipelineType::SkiaGL:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaOpenGLPipeline>(thread),
                                     uiThreadId, renderThreadId);
        case RenderPipelineType::SkiaVulkan:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaVulkanPipeline>(thread),
                                     uiThreadId, renderThreadId);
#ifndef __ANDROID__
        case RenderPipelineType::SkiaCpu:
            return new CanvasContext(thread, translucent, rootRenderNode, contextFactory,
                                     std::make_unique<skiapipeline::SkiaCpuPipeline>(thread),
                                     uiThreadId, renderThreadId);
#endif
        default:
            LOG_ALWAYS_FATAL("canvas context type %d not supported", (int32_t)renderType);
            break;
    }
    return nullptr;
}

Binding the Surface

From the analysis so far, ViewRootImpl#setView creates the ThreadedRenderer object and finishes the rendering-related preparation, but it does nothing with the Surface used for drawing. When analyzing the software drawing flow we found that the Surface is bound only after it becomes available, so let's check whether similar logic runs once the Surface is ready.

java
public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
    private void performTraversals() {
    	// ...
        if (mFirst || windowShouldResize || viewVisibilityChanged || params != null || mForceNextWindowRelayout) {
			// ...
			try {
				// ...
				relayoutResult = relayoutWindow(params, viewVisibility, insetsPending);
				// ...
				if (surfaceCreated) {
					if (mAttachInfo.mThreadedRenderer != null) {
                        try {
                            hwInitialized = mAttachInfo.mThreadedRenderer.initialize(mSurface);
                            if (hwInitialized && (host.mPrivateFlags & View.PFLAG_REQUEST_TRANSPARENT_REGIONS) == 0) {
                                // Don't pre-allocate if transparent regions
                                // are requested as they may not be needed
                                mAttachInfo.mThreadedRenderer.allocateBuffers();
                            }
                        } catch (OutOfResourcesException e) {
                            handleOutOfResourcesException(e);
                            return;
                        }
                    }
				}
			} 
		}
	}
}

As you can see, once the Surface has been created, ThreadedRenderer#initialize is called.

java
public final class ThreadedRenderer extends HardwareRenderer {
    /**
     * Initializes the threaded renderer for the specified surface.
     *
     * @param surface The surface to render
     *
     * @return True if the initialization was successful, false otherwise.
     */
    boolean initialize(Surface surface) throws OutOfResourcesException {
        boolean status = !mInitialized;
        mInitialized = true;
        updateEnabledState(surface);
        setSurface(surface);
        return status;
    }

    @Override
    public void setSurface(Surface surface) {
        // TODO: Do we ever pass a non-null but isValid() = false surface?
        // This is here to be super conservative for ViewRootImpl
        if (surface != null && surface.isValid()) {
            super.setSurface(surface);
        } else {
            super.setSurface(null);
        }
    }
}

public class HardwareRenderer {
    /**
     * See {@link #setSurface(Surface)}
     *
     * @hide
     * @param discardBuffer determines whether the surface will attempt to preserve its contents
     *                      between frames.  If set to true the renderer will attempt to preserve
     *                      the contents of the buffer between frames if the implementation allows
     *                      it.  If set to false no attempt will be made to preserve the buffer's
     *                      contents between frames.
     */
    public void setSurface(@Nullable Surface surface, boolean discardBuffer) {
        if (surface != null && !surface.isValid()) {
            throw new IllegalArgumentException("Surface is invalid. surface.isValid() == false.");
        }
        // discardBuffer is false on this path
        nSetSurface(mNativeProxy, surface, discardBuffer);
    }

    private static native void nSetSurface(long nativeProxy, Surface window, boolean discardBuffer);

}

Ultimately this ends up in the native layer; let's follow nSetSurface.

cpp
// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static void android_view_ThreadedRenderer_setSurface(JNIEnv* env, jobject clazz, jlong proxyPtr, jobject jsurface, jboolean discardBuffer) {
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    ANativeWindow* window = nullptr;
    if (jsurface) {
        window = fromSurface(env, jsurface);
    }
    bool enableTimeout = true;
    // ...
    // Set the Surface
    proxy->setSurface(window, enableTimeout);
    if (window) {
        ANativeWindow_release(window);
    }
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
void RenderProxy::setSurface(ANativeWindow* window, bool enableTimeout) {
    if (window) { ANativeWindow_acquire(window); }
    mRenderThread.queue().post([this, win = window, enableTimeout]() mutable {
        mContext->setSurface(win, enableTimeout);
        if (win) { ANativeWindow_release(win); }
    });
}

// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::setSurface(ANativeWindow* window, bool enableTimeout) {
    ATRACE_CALL();

    startHintSession();
    if (window) {
        mNativeSurface = std::make_unique<ReliableSurface>(window);
        mNativeSurface->init();
        if (enableTimeout) {
            // TODO: Fix error handling & re-shorten timeout
            ANativeWindow_setDequeueTimeout(window, 4000_ms);
        }
    } else {
        mNativeSurface = nullptr;
    }
    setupPipelineSurface();
}

void CanvasContext::setupPipelineSurface() {
 	// Hand the surface to the render pipeline
    bool hasSurface = mRenderPipeline->setSurface(mNativeSurface ? mNativeSurface->getNativeWindow() : nullptr, mSwapBehavior);

    if (mNativeSurface && !mNativeSurface->didSetExtraBuffers()) {
        setBufferCount(mNativeSurface->getNativeWindow());
    }

    mFrameNumber = 0;

    if (mNativeSurface != nullptr && hasSurface) {
        mHaveNewSurface = true;
        mSwapHistory.clear();
        // Enable frame stats after the surface has been bound to the appropriate graphics API.
        // Order is important when new and old surfaces are the same, because old surface has
        // its frame stats disabled automatically.
        native_window_enable_frame_timestamps(mNativeSurface->getNativeWindow(), true);
        native_window_set_scaling_mode(mNativeSurface->getNativeWindow(),
                                       NATIVE_WINDOW_SCALING_MODE_FREEZE);
    } else {
        mRenderThread.removeFrameCallback(this);
        mGenerationID++;
    }
}

As you can see, the Surface is ultimately handed to the render pipeline, and the pipeline can use it from then on.

Dispatching the View Draw Pass

With hardware acceleration enabled, View drawing goes through ThreadedRenderer#draw. Let's follow the source to see exactly what ThreadedRenderer#draw does internally.

java
public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
	// ...
	private boolean performDraw() {
		final boolean fullRedrawNeeded = mFullRedrawNeeded || mSyncBufferCallback != null;
        // ...
        boolean usingAsyncReport = isHardwareEnabled() && mSyncBufferCallback != null;
        // ...
        try {
            boolean canUseAsync = draw(fullRedrawNeeded, usingAsyncReport && mSyncBuffer);
            // ...
        } finally {
            // ...
        }
        // ...
    }

    private boolean draw(boolean fullRedrawNeeded, boolean forceDraw) {
        Surface surface = mSurface;
        // Return immediately if the surface is not valid; after relayoutWindow the surface has already been updated and is valid
        if (!surface.isValid()) {
            return false;
        }

		// ...
        final Rect dirty = mDirty;
        if (fullRedrawNeeded) {
            dirty.set(0, 0, (int) (mWidth * appScale + 0.5f), (int) (mHeight * appScale + 0.5f));
        }

		// ...
        if (!dirty.isEmpty() || mIsAnimating || accessibilityFocusDirty) {
            if (isHardwareEnabled()) {
            	// ...
            	// With hardware drawing enabled, draw through the ThreadedRenderer
            	mAttachInfo.mThreadedRenderer.draw(mView, mAttachInfo, this);
            } else {
            	// Without hardware drawing, fall back to software drawing
                // ...
            }
        }
		// If an animation is running, schedule another VSYNC signal to run the traversal again
        if (animating) {
            mFullRedrawNeeded = true;
            scheduleTraversals();
        }
        return useAsyncReport;
    }
}

From the source of ThreadedRenderer#draw we can see that it mainly does two things:

  • Update the root node's DisplayList
  • Sync the DisplayList held by the RenderNode tree to the render thread and request the next frame
java
public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
        attachInfo.mViewRootImpl.mViewFrameInfo.markDrawStart();
		// 1. Update the root node's DisplayList
        updateRootDisplayList(view, callbacks);

        // Register animating render nodes started before the ThreadedRenderer was created; such animations typically begin before the first draw.
        if (attachInfo.mPendingAnimatingRenderNodes != null) {
            final int count = attachInfo.mPendingAnimatingRenderNodes.size();
            for (int i = 0; i < count; i++) {
                registerAnimatingRenderNode(attachInfo.mPendingAnimatingRenderNodes.get(i));
            }
            attachInfo.mPendingAnimatingRenderNodes.clear();
            attachInfo.mPendingAnimatingRenderNodes = null;
        }

        final FrameInfo frameInfo = attachInfo.mViewRootImpl.getUpdatedFrameInfo();
		// 2. Sync the RenderNode tree's DisplayLists to the render thread and request the next frame
        int syncResult = syncAndDrawFrame(frameInfo);
        if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
            // The surface was lost, so request another layout; WindowManager will provide a new surface at the next layout.
            attachInfo.mViewRootImpl.mForceNextWindowRelayout = true;
            attachInfo.mViewRootImpl.requestLayout();
        }
        if ((syncResult & SYNC_REDRAW_REQUESTED) != 0) {
            attachInfo.mViewRootImpl.invalidate();
        }
    }
	
    // Sync the RenderNode tree to the render thread and request the next frame
    @SyncAndDrawResult
    public int syncAndDrawFrame(@NonNull FrameInfo frameInfo) {
        return nSyncAndDrawFrame(mNativeProxy, frameInfo.frameInfo, frameInfo.frameInfo.length);
    }
}

Updating the root node's DisplayList

As the source shows, ThreadedRenderer#updateRootDisplayList does not just update the root View's DisplayList; it first walks the View tree updating DisplayLists, and only then updates the root's DisplayList.

java
public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    private void updateRootDisplayList(View view, DrawCallbacks callbacks) {
        // 1. Walk the View tree from the root View, updating DisplayLists
        updateViewTreeDisplayList(view);
		// ...
		// 2. If the root render node needs an update, or it has no DisplayList, process the root render node here
		// If it still needs an update after step 1, then step 1 did not touch the root render node
        if (mRootNodeNeedsUpdate || !mRootNode.hasDisplayList()) {
            RecordingCanvas canvas = mRootNode.beginRecording(mSurfaceWidth, mSurfaceHeight);
            try {
                final int saveCount = canvas.save();
                canvas.translate(mInsetLeft, mInsetTop);
                callbacks.onPreDraw(canvas);

                canvas.enableZ();
                // 3. Record the root View's DisplayList into the canvas
                canvas.drawRenderNode(view.updateDisplayListIfDirty());
                canvas.disableZ();

                callbacks.onPostDraw(canvas);
                canvas.restoreToCount(saveCount);
                mRootNodeNeedsUpdate = false;
            } finally {
                mRootNode.endRecording();
            }
        }
        Trace.traceEnd(Trace.TRACE_TAG_VIEW);
    }
	
	// Update the DisplayLists across the view tree
    private void updateViewTreeDisplayList(View view) {
        view.mPrivateFlags |= View.PFLAG_DRAWN;
        // Update mRecreateDisplayList: if invalidate() was called on this view, mark it as needing a fresh DisplayList
        view.mRecreateDisplayList = (view.mPrivateFlags & View.PFLAG_INVALIDATED) == View.PFLAG_INVALIDATED;
        view.mPrivateFlags &= ~View.PFLAG_INVALIDATED;
        // Call View#updateDisplayListIfDirty, which dispatches the update internally
        view.updateDisplayListIfDirty();
        view.mRecreateDisplayList = false;
    }
    // ...
}

Every View instance creates its own RenderNode when it is constructed. After ThreadedRenderer#updateViewTreeDisplayList calls updateDisplayListIfDirty on the root View, the method first checks whether the View needs to recreate its own DisplayList. If not, it simply calls dispatchGetDisplayList to forward the update to all child Views; otherwise it performs whatever drawing the root View itself needs and dispatches the draw operation to all child Views.

java
public class View implements Drawable.Callback, KeyEvent.Callback, AccessibilityEventSource {

    // Fetch this view's RenderNode instance and update its DisplayList as needed
    @NonNull
    @UnsupportedAppUsage(maxTargetSdk = Build.VERSION_CODES.R, trackingBug = 170729553)
    public RenderNode updateDisplayListIfDirty() {
        final RenderNode renderNode = mRenderNode;
        // A view can only have a DisplayList once it has been attached and hardware acceleration is enabled
        if (!canHaveDisplayList()) {
            return renderNode;
        }
		// 1. If the drawing cache is invalid, there is no DisplayList, or mRecreateDisplayList is set, further processing is needed
        if ((mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0 || !renderNode.hasDisplayList() || (mRecreateDisplayList)) {
            // 1.1 This View's own DisplayList does not need recreating; just tell the children to restore or rebuild theirs.
            // This corresponds to (mPrivateFlags & PFLAG_DRAWING_CACHE_VALID) == 0, but with hardware acceleration on, whether a DisplayList must be rebuilt depends mainly on hasDisplayList and mRecreateDisplayList
            if (renderNode.hasDisplayList() && !mRecreateDisplayList) {
                mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
                mPrivateFlags &= ~PFLAG_DIRTY_MASK;
                // Tell the child Views to rebuild their DisplayLists
                dispatchGetDisplayList();
                return renderNode; // no work needed
            }

            // 1.2 This View's DisplayList must be recreated; set mRecreateDisplayList to true so that drawChild can copy the children's DisplayLists into this View's DisplayList.
            mRecreateDisplayList = true;

            int width = mRight - mLeft;
            int height = mBottom - mTop;
            int layerType = getLayerType();
            renderNode.clearStretch();
			// 1.2.1 Start recording this View's drawing commands
            final RecordingCanvas canvas = renderNode.beginRecording(width, height);

            try {
                if (layerType == LAYER_TYPE_SOFTWARE) { // Software drawing path
                    // ...
                } else { // Hardware drawing path
                    // ...
                    // If this View is a layout container with no background, skip drawing it and go straight to the children.
                    if ((mPrivateFlags & PFLAG_SKIP_DRAW) == PFLAG_SKIP_DRAW) {
						// Dispatch the draw to the child Views
                        dispatchDraw(canvas);
                        // ...
                    } else {
                    	// Draw this View itself onto the canvas
                        draw(canvas);
                    }
                }
            } finally {
            	// Stop recording this View's drawing commands
                renderNode.endRecording();
                setDisplayListProperties(renderNode);
            }
        } else {
            mPrivateFlags |= PFLAG_DRAWN | PFLAG_DRAWING_CACHE_VALID;
            mPrivateFlags &= ~PFLAG_DIRTY_MASK;
        }
        return renderNode;
    }

    /**
     * View itself does not implement this method, because only Views that extend ViewGroup need to dispatch to children.
     * ViewGroup iterates over all of its children and calls their updateDisplayListIfDirty, starting a new round of dispatch.
     * It is called by getDisplayList() when the parent ViewGroup does not need
     * to recreate its own display list, which would happen if it went through the normal
     * draw/dispatchDraw mechanisms.
     *
     * @hide
     */
    protected void dispatchGetDisplayList() {}

    /**
     * View itself does not implement this method, because only Views that extend ViewGroup need to dispatch to children.
     * ViewGroup iterates over all of its children and calls their draw method, starting a new round of dispatch.
     */
    protected void dispatchDraw(Canvas canvas) {

    }
	
	// ...
}
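The update pass above can be condensed into a small runnable model: a dirty node re-records itself (recursing into children from inside the recording), while a clean node merely forwards the update to its children. A simplified sketch in plain Java (hedged: real Views track many more flags, such as PFLAG_DRAWING_CACHE_VALID, and record into native display lists):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified model of the display-list update pass: a dirty node re-records
// its drawing commands (recursing into children during the recording), while
// a clean node only forwards the update and keeps its own recorded list.
public class DisplayListUpdateDemo {
    static class Node {
        final String name;
        final List<Node> children = new ArrayList<>();
        boolean invalidated;   // set by an invalidate() call
        String displayList;    // null until first recording
        int recordCount;       // how many times this node re-recorded

        Node(String name) { this.name = name; }

        void updateDisplayListIfDirty() {
            if (displayList != null && !invalidated) {
                // Clean: just let children refresh theirs (dispatchGetDisplayList).
                for (Node c : children) c.updateDisplayListIfDirty();
                return;
            }
            // Dirty or never recorded: re-record self, recursing into children.
            recordCount++;
            StringBuilder canvas = new StringBuilder("draw(" + name + ")");
            for (Node c : children) {
                c.updateDisplayListIfDirty();
                canvas.append("+ref(").append(c.name).append(')');
            }
            displayList = canvas.toString();
            invalidated = false;
        }
    }

    public static void main(String[] args) {
        Node root = new Node("root");
        Node child = new Node("child");
        root.children.add(child);

        root.updateDisplayListIfDirty();  // first pass records everything
        child.invalidated = true;
        root.updateDisplayListIfDirty();  // only the child re-records
        System.out.println(root.recordCount + " " + child.recordCount); // 1 2
    }
}
```

The second pass shows the payoff of the scheme: invalidating one child does not force its clean ancestors to re-record; they only re-reference the child's fresh display list.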
java
// android.graphics.RenderNode
public final class RenderNode {
    /**
     * Ends the recording for this display list. Calling this method marks
     * the display list valid and {@link #hasDisplayList()} will return true.
     *
     * @see #beginRecording(int, int)
     * @see #hasDisplayList()
     */
    public void endRecording() {
        if (mCurrentRecordingCanvas == null) {
            throw new IllegalStateException("No recording in progress, forgot to call #beginRecording()?");
        }
        RecordingCanvas canvas = mCurrentRecordingCanvas;
        mCurrentRecordingCanvas = null;
        canvas.finishRecording(this);
        canvas.recycle();
    }
}

// android.graphics.RecordingCanvas
public final class RecordingCanvas extends BaseRecordingCanvas {
    void finishRecording(RenderNode node) {
        nFinishRecording(mNativeCanvasWrapper, node.mNativeRenderNode);
    }
    
    @CriticalNative
    private static native void nFinishRecording(long renderer, long renderNode);
}    
cpp
// frameworks/base/libs/hwui/jni/android_graphics_DisplayListCanvas.cpp
static void android_view_DisplayListCanvas_finishRecording(CRITICAL_JNI_PARAMS_COMMA jlong canvasPtr, jlong renderNodePtr) {
    Canvas* canvas = reinterpret_cast<Canvas*>(canvasPtr);
    RenderNode* renderNode = reinterpret_cast<RenderNode*>(renderNodePtr);
    canvas->finishRecording(renderNode);
}

// frameworks/base/libs/hwui/pipeline/skia/SkiaRecordingCanvas.cpp
void SkiaRecordingCanvas::finishRecording(uirenderer::RenderNode* destination) {
    destination->setStagingDisplayList(uirenderer::DisplayList(finishRecording()));
}

// frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::setStagingDisplayList(DisplayList&& newData) {
    mValid = newData.isValid();
    mNeedsDisplayListSync = true;
    mStagingDisplayList = std::move(newData);
}

So updateDisplayListIfDirty mainly handles dispatching the drawing work and drawing the root View itself. When drawing finishes, RecordingCanvas#finishRecording is called, which reaches the native SkiaRecordingCanvas::finishRecording via JNI and stores the recorded DisplayList into the RenderNode's mStagingDisplayList field.
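This staging mechanism is a double-buffering scheme: the UI thread records into a staging slot, and the list actually used for rendering is only swapped in later, during the sync step. A simplified plain-Java sketch (the single-class model below stands in for RenderNode's native side; the real sync happens on the render thread):

```java
// Simplified model of RenderNode's staging mechanism: the UI thread finishes
// recording into the staging display list, and the active display list used
// for rendering is only replaced during the sync step, so the renderer never
// observes a half-recorded list.
public class StagingDemo {
    static class RenderNode {
        private String stagingDisplayList; // written by the UI thread
        private String displayList;        // read when rendering
        private boolean needsSync;

        // Models SkiaRecordingCanvas::finishRecording -> setStagingDisplayList.
        void setStagingDisplayList(String newData) {
            stagingDisplayList = newData;
            needsSync = true;
        }

        // Models the sync step performed before drawing the next frame.
        void syncDisplayList() {
            if (needsSync) {
                displayList = stagingDisplayList;
                needsSync = false;
            }
        }

        String activeDisplayList() { return displayList; }
    }

    public static void main(String[] args) {
        RenderNode node = new RenderNode();
        node.setStagingDisplayList("frame-1 ops");
        // Before the sync, rendering still sees the old (here: null) list.
        System.out.println(node.activeDisplayList()); // null
        node.syncDisplayList();
        System.out.println(node.activeDisplayList()); // frame-1 ops
    }
}
```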

A View draws itself through its draw method, which is the same draw method the software drawing path uses, so we won't repeat that analysis here.

java
public class View implements Drawable.Callback, KeyEvent.Callback, AccessibilityEventSource {
    /**
     * Manually render this view (and all of its children) to the given Canvas.
     * The view must have already done a full layout before this function is
     * called.  When implementing a view, implement
     * {@link #onDraw(android.graphics.Canvas)} instead of overriding this method.
     * If you do need to override this method, call the superclass version.
     *
     * @param canvas The Canvas to which the View is rendered.
     */
    @CallSuper
    public void draw(Canvas canvas) {
        final int privateFlags = mPrivateFlags;
        mPrivateFlags = (privateFlags & ~PFLAG_DIRTY_MASK) | PFLAG_DRAWN;

        /*
         * Draw traversal performs several drawing steps which must be executed
         * in the appropriate order:
         *
         *      1. Draw the background
         *      2. If necessary, save the canvas' layers to prepare for fading
         *      3. Draw view's content
         *      4. Draw children
         *      5. If necessary, draw the fading edges and restore layers
         *      6. Draw decorations (scrollbars for instance)
         *      7. If necessary, draw the default focus highlight
         */

        // Step 1, draw the background, if needed
        int saveCount;
        drawBackground(canvas);

        // skip step 2 & 5 if possible (common case)
        final int viewFlags = mViewFlags;
        boolean horizontalEdges = (viewFlags & FADING_EDGE_HORIZONTAL) != 0;
        boolean verticalEdges = (viewFlags & FADING_EDGE_VERTICAL) != 0;
        if (!verticalEdges && !horizontalEdges) {
            // Step 3, draw the content
            onDraw(canvas);

            // Step 4, draw the children
            dispatchDraw(canvas);

            drawAutofilledHighlight(canvas);

            // Overlay is part of the content and draws beneath Foreground
            if (mOverlay != null && !mOverlay.isEmpty()) {
                mOverlay.getOverlayView().dispatchDraw(canvas);
            }

            // Step 6, draw decorations (foreground, scrollbars)
            onDrawForeground(canvas);

            // Step 7, draw the default focus highlight
            drawDefaultFocusHighlight(canvas);

            if (isShowingLayoutBounds()) {
                debugDrawFocus(canvas);
            }

            // we're done...
            return;
        }
		// ...
    }

}
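The seven-step ordering above is a fixed template. The sketch below (a hypothetical SketchView class, not the framework's android.view.View) replays the common fast path, in which fading edges are disabled and steps 2 and 5 are skipped:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the fixed step ordering inside View#draw.
// SketchView is an illustrative stand-in for the real View class.
public class SketchView {

    // Mirrors steps 1, 3, 4, 6 and 7 of the common fast path
    // (no fading edges, so steps 2 and 5 are skipped).
    public List<String> draw() {
        List<String> steps = new ArrayList<>();
        steps.add("background");   // Step 1: drawBackground
        steps.add("content");      // Step 3: onDraw
        steps.add("children");     // Step 4: dispatchDraw
        steps.add("decorations");  // Step 6: onDrawForeground
        steps.add("focus");        // Step 7: drawDefaultFocusHighlight
        return steps;
    }

    public static void main(String[] args) {
        System.out.println(new SketchView().draw());
    }
}
```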

The overall flow is summarized in the figure below:

Syncing the data to the render thread

java
public final class ThreadedRenderer extends HardwareRenderer {
	// ...
    void draw(View view, AttachInfo attachInfo, DrawCallbacks callbacks) {
        attachInfo.mViewRootImpl.mViewFrameInfo.markDrawStart();
		// 1. Update the root node's DisplayList
        // ...

        final FrameInfo frameInfo = attachInfo.mViewRootImpl.getUpdatedFrameInfo();
		// 2. Sync the RenderNode tree to the render thread and request the next frame
        int syncResult = syncAndDrawFrame(frameInfo);
        if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
            // The surface was lost, so request another layout pass; the WindowManager
            // will provide a new surface on the next layout.
            attachInfo.mViewRootImpl.mForceNextWindowRelayout = true;
            attachInfo.mViewRootImpl.requestLayout();
        }
        if ((syncResult & SYNC_REDRAW_REQUESTED) != 0) {
            attachInfo.mViewRootImpl.invalidate();
        }
    }
	
    // Sync the RenderNode tree to the render thread and request the next frame
    @SyncAndDrawResult
    public int syncAndDrawFrame(@NonNull FrameInfo frameInfo) {
        return nSyncAndDrawFrame(mNativeProxy, frameInfo.frameInfo, frameInfo.frameInfo.length);
    }
}

// android.view.ViewRootImpl
public final class ViewRootImpl implements ViewParent, View.AttachInfo.Callbacks, ThreadedRenderer.DrawCallbacks, AttachedSurfaceControl {
    /**
     * Update the Choreographer's FrameInfo object with the timing information for the current
     * ViewRootImpl instance. Erase the data in the current ViewFrameInfo to prepare for the next
     * frame.
     * @return the updated FrameInfo object
     */
    protected @NonNull FrameInfo getUpdatedFrameInfo() {
        // Since Choreographer is a thread-local singleton while we can have multiple
        // ViewRootImpl's, populate the frame information from the current viewRootImpl before
        // starting the draw
        FrameInfo frameInfo = mChoreographer.mFrameInfo;
        mViewFrameInfo.populateFrameInfo(frameInfo);
        mViewFrameInfo.reset();
        mInputEventAssigner.notifyFrameProcessed();
        return frameInfo;
    }
	// ...
}

As we can see, once the draw commands have been recorded, the current frame's timing data is first copied into the FrameInfo held by Choreographer (see getUpdatedFrameInfo), and then nSyncAndDrawFrame crosses into the native layer, where the render proxy performs the actual data synchronization.
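The syncResult returned by syncAndDrawFrame is a bitmask that ThreadedRenderer.draw tests flag by flag. A minimal sketch of that pattern, with illustrative flag values rather than the real framework constants:

```java
// Sketch of how a syncAndDrawFrame-style result is interpreted as a bitmask.
// Flag values here are illustrative, not the real HardwareRenderer constants.
public class SyncResultDemo {
    static final int SYNC_OK = 0;
    static final int SYNC_REDRAW_REQUESTED = 1 << 0;
    static final int SYNC_LOST_SURFACE_REWARD_IF_FOUND = 1 << 1;

    // Returns a summary of the actions the UI thread would take
    // for a given sync result.
    static String handle(int syncResult) {
        StringBuilder actions = new StringBuilder();
        if ((syncResult & SYNC_LOST_SURFACE_REWARD_IF_FOUND) != 0) {
            actions.append("relayout;");   // surface lost: force the next relayout
        }
        if ((syncResult & SYNC_REDRAW_REQUESTED) != 0) {
            actions.append("invalidate;"); // render thread asked for another frame
        }
        return actions.toString();
    }

    public static void main(String[] args) {
        System.out.println(handle(SYNC_REDRAW_REQUESTED | SYNC_LOST_SURFACE_REWARD_IF_FOUND));
    }
}
```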

cpp
// frameworks/base/libs/hwui/jni/android_graphics_HardwareRenderer.cpp
static int android_view_ThreadedRenderer_syncAndDrawFrame(JNIEnv* env, jobject clazz, jlong proxyPtr, jlongArray frameInfo, jint frameInfoSize) {
	// Obtain the RenderProxy object
    RenderProxy* proxy = reinterpret_cast<RenderProxy*>(proxyPtr);
    env->GetLongArrayRegion(frameInfo, 0, frameInfoSize, proxy->frameInfo());
    // Sync and draw the frame data through the RenderProxy
    return proxy->syncAndDrawFrame();
}

// frameworks/base/libs/hwui/renderthread/RenderProxy.cpp
int RenderProxy::syncAndDrawFrame() {
    return mDrawFrameTask.drawFrame();
}

// frameworks/base/libs/hwui/renderthread/DrawFrameTask.cpp
int DrawFrameTask::drawFrame() {
    mSyncResult = SyncResult::OK;
    mSyncQueued = systemTime(SYSTEM_TIME_MONOTONIC);
    postAndWait();
    return mSyncResult;
}

void DrawFrameTask::postAndWait() {
    ATRACE_CALL();
    AutoMutex _lock(mLock);
    mRenderThread->queue().post([this]() { run(); });
    // Block until the data sync completes
    mSignal.wait(mLock);
}

void DrawFrameTask::run() {
    const int64_t vsyncId = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameTimelineVsyncId)];
    mContext->setSyncDelayDuration(systemTime(SYSTEM_TIME_MONOTONIC) - mSyncQueued);
    mContext->setTargetSdrHdrRatio(mRenderSdrHdrRatio);

    auto hardwareBufferParams = mHardwareBufferParams;
    mContext->setHardwareBufferRenderParams(hardwareBufferParams);
    IRenderPipeline* pipeline = mContext->getRenderPipeline();
    bool canUnblockUiThread;
    bool canDrawThisFrame;
    bool solelyTextureViewUpdates;
    {
        TreeInfo info(TreeInfo::MODE_FULL, *mContext);
        info.forceDrawFrame = mForceDrawFrame;
        mForceDrawFrame = false;
        // Sync the frame state
        canUnblockUiThread = syncFrameState(info);
        canDrawThisFrame = !info.out.skippedFrameReason.has_value();
        solelyTextureViewUpdates = info.out.solelyTextureViewUpdates;

        if (mFrameCommitCallback) {
            mContext->addFrameCommitListener(std::move(mFrameCommitCallback));
            mFrameCommitCallback = nullptr;
        }
    }

    // Grab a copy of everything we need
    CanvasContext* context = mContext;
    std::function<std::function<void(bool)>(int32_t, int64_t)> frameCallback = std::move(mFrameCallback);
    std::function<void()> frameCompleteCallback = std::move(mFrameCompleteCallback);
    mFrameCallback = nullptr;
    mFrameCompleteCallback = nullptr;

    // From this point on anything in "this" is *UNSAFE TO ACCESS*
    if (canUnblockUiThread) {
        unblockUiThread();
    }

    // Even if we aren't drawing this vsync pulse the next frame number will still be accurate
    // ...

    if (CC_LIKELY(canDrawThisFrame)) {
        context->draw(solelyTextureViewUpdates);
    } else {
#ifdef __ANDROID__
        // Do a flush in case syncFrameState performed any texture uploads. Since we skipped
        // the draw() call, those uploads (or deletes) will end up sitting in the queue.
        // Do them now
        if (GrDirectContext* grContext = mRenderThread->getGrContext()) {
            grContext->flushAndSubmit();
        }
#endif
        // wait on fences so tasks don't overlap next frame
        context->waitOnFences();
    }
	// ...
    if (!canUnblockUiThread) {
        unblockUiThread();
    }

    if (pipeline->hasHardwareBuffer()) {
        auto fence = pipeline->flush();
        hardwareBufferParams.invokeRenderCallback(std::move(fence), 0);
    }
}

void DrawFrameTask::unblockUiThread() {
    AutoMutex _lock(mLock);
    mSignal.signal();
}

bool DrawFrameTask::syncFrameState(TreeInfo& info) {
    ATRACE_CALL();
    int64_t vsync = mFrameInfo[static_cast<int>(FrameInfoIndex::Vsync)];
    int64_t intendedVsync = mFrameInfo[static_cast<int>(FrameInfoIndex::IntendedVsync)];
    int64_t vsyncId = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameTimelineVsyncId)];
    int64_t frameDeadline = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameDeadline)];
    int64_t frameInterval = mFrameInfo[static_cast<int>(FrameInfoIndex::FrameInterval)];
    mRenderThread->timeLord().vsyncReceived(vsync, intendedVsync, vsyncId, frameDeadline, frameInterval);
    bool canDraw = mContext->makeCurrent();
    mContext->unpinImages();

#ifdef __ANDROID__
    for (size_t i = 0; i < mLayers.size(); i++) {
        if (mLayers[i]) {
            mLayers[i]->apply();
        }
    }
#endif

    mLayers.clear();
    mContext->setContentDrawBounds(mContentDrawBounds);
    // Prepare the render node tree
    mContext->prepareTree(info, mFrameInfo, mSyncQueued, mTargetNode);

    // This is after the prepareTree so that any pending operations
    // (RenderNode tree state, prefetched layers, etc...) will be flushed.
    bool hasTarget = mContext->hasOutputTarget();
    if (CC_UNLIKELY(!hasTarget || !canDraw)) {
        if (!hasTarget) {
            mSyncResult |= SyncResult::LostSurfaceRewardIfFound;
            info.out.skippedFrameReason = SkippedFrameReason::NoOutputTarget;
        } else {
            // If we have a surface but can't draw we must be stopped
            mSyncResult |= SyncResult::ContextIsStopped;
            info.out.skippedFrameReason = SkippedFrameReason::ContextIsStopped;
        }
    }

    if (info.out.hasAnimations) {
        if (info.out.requiresUiRedraw) {
            mSyncResult |= SyncResult::UIRedrawRequired;
        }
    }
    if (info.out.skippedFrameReason) {
        mSyncResult |= SyncResult::FrameDropped;
    }
    // If prepareTextures is false, we ran out of texture cache space
    return info.prepareTextures;
}
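The postAndWait/unblockUiThread pair above is a classic post-and-wait handshake: the UI thread enqueues work on the render thread and blocks only until the sync portion completes, while the render thread carries on drawing by itself. A simplified Java analogue (a CountDownLatch standing in for mSignal, a BlockingQueue for the render thread's task queue; names are illustrative):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Simplified analogue of DrawFrameTask's post-and-wait handshake.
public class PostAndWaitDemo {
    final BlockingQueue<Runnable> queue = new ArrayBlockingQueue<>(16);
    volatile boolean synced = false;

    // Analogue of drawFrame()/postAndWait(): post the task, block until
    // the sync part signals, then return.
    public boolean drawFrame() {
        CountDownLatch signal = new CountDownLatch(1); // plays the role of mSignal
        queue.add(() -> {
            synced = true;       // stands in for syncFrameState(info)
            signal.countDown();  // unblockUiThread(): UI thread may continue
            // ... the render thread would go on to context->draw() here
        });
        try {
            signal.await();      // postAndWait(): block until sync completes
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return synced;
    }

    // Runs one UI-thread/render-thread round trip.
    static boolean runOnce() {
        PostAndWaitDemo demo = new PostAndWaitDemo();
        Thread renderThread = new Thread(() -> {
            try {
                demo.queue.take().run(); // render thread loop (one iteration)
            } catch (InterruptedException ignored) { }
        });
        renderThread.start();
        boolean result = demo.drawFrame();
        try {
            renderThread.join();
        } catch (InterruptedException ignored) { }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(runOnce()); // prints true once the sync has completed
    }
}
```

Note that, as in DrawFrameTask::run, the UI thread is released as soon as the sync finishes; the actual drawing continues on the render thread without blocking the caller.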
cpp
// frameworks/base/libs/hwui/renderthread/CanvasContext.cpp
void CanvasContext::prepareTree(TreeInfo& info, int64_t* uiFrameInfo, int64_t syncQueued,
                                RenderNode* target) {
    mRenderThread.removeFrameCallback(this);

    // If the previous frame was dropped we don't need to hold onto it, so
    // just keep using the previous frame's structure instead
    // ...

    mCurrentFrameInfo->importUiThreadInfo(uiFrameInfo);
    mCurrentFrameInfo->set(FrameInfoIndex::SyncQueued) = syncQueued;
    mCurrentFrameInfo->markSyncStart();

    info.damageAccumulator = &mDamageAccumulator;
    info.layerUpdateQueue = &mLayerUpdateQueue;
    info.damageGenerationId = mDamageId++;
    info.out.skippedFrameReason = std::nullopt;

    mAnimationContext->startFrame(info.mode);
    // Walk the render nodes and sync their data into the TreeInfo object; this
    // eventually reaches RenderNode::syncDisplayList, which moves the
    // mStagingDisplayList recorded earlier into mDisplayList, completing the sync
    for (const sp<RenderNode>& node : mRenderNodes) {
        info.mode = (node.get() == target ? TreeInfo::MODE_FULL : TreeInfo::MODE_RT_ONLY);
        node->prepareTree(info);
        GL_CHECKPOINT(MODERATE);
    }
    // ...
    mIsDirty = true;

    if (CC_UNLIKELY(!hasOutputTarget())) {
        info.out.skippedFrameReason = SkippedFrameReason::NoOutputTarget;
        mCurrentFrameInfo->setSkippedFrameReason(*info.out.skippedFrameReason);
        return;
    }
	// ...

    bool postedFrameCallback = false;
    if (info.out.hasAnimations || info.out.skippedFrameReason) {
        if (CC_UNLIKELY(!Properties::enableRTAnimations)) {
            info.out.requiresUiRedraw = true;
        }
        if (!info.out.requiresUiRedraw) {
            // If animationsNeedsRedraw is set don't bother posting for an RT anim
            // as we will just end up fighting the UI thread.
            // Post a callback for the next frame
            mRenderThread.postFrameCallback(this);
            postedFrameCallback = true;
        }
    }
	// ...
    if (!postedFrameCallback && info.out.animatedImageDelay != TreeInfo::Out::kNoAnimatedImageDelay) {
        // Subtract the time of one frame so it can be displayed on time.
        const nsecs_t kFrameTime = mRenderThread.timeLord().frameIntervalNanos();
        if (info.out.animatedImageDelay <= kFrameTime) {
            mRenderThread.postFrameCallback(this);
        } else {
            const auto delay = info.out.animatedImageDelay - kFrameTime;
            int genId = mGenerationID;
            mRenderThread.queue().postDelayed(delay, [this, genId]() {
                if (mGenerationID == genId) {
                    mRenderThread.postFrameCallback(this);
                }
            });
        }
    }
}

// Called by choreographer to do an RT-driven animation
void CanvasContext::doFrame() {
    if (!mRenderPipeline->isSurfaceReady()) return;
    mIdleDuration = systemTime(SYSTEM_TIME_MONOTONIC) - mRenderThread.timeLord().computeFrameTimeNanos();
    prepareAndDraw(nullptr);
}

void CanvasContext::prepareAndDraw(RenderNode* node) {
    int64_t vsyncId = mRenderThread.timeLord().lastVsyncId();
    nsecs_t vsync = mRenderThread.timeLord().computeFrameTimeNanos();
    int64_t frameDeadline = mRenderThread.timeLord().lastFrameDeadline();
    int64_t frameInterval = mRenderThread.timeLord().frameIntervalNanos();
    int64_t frameInfo[UI_THREAD_FRAME_INFO_SIZE];
    UiFrameInfoBuilder(frameInfo).addFlag(FrameInfoFlags::RTAnimation).setVsync(vsync, vsync, vsyncId, frameDeadline, frameInterval);

    TreeInfo info(TreeInfo::MODE_RT_ONLY, *this);
    prepareTree(info, frameInfo, systemTime(SYSTEM_TIME_MONOTONIC), node);
    if (!info.out.skippedFrameReason) {
        draw(info.out.solelyTextureViewUpdates);
    } else {
        // wait on fences so tasks don't overlap next frame
        waitOnFences();
    }
}

void CanvasContext::draw(bool solelyTextureViewUpdates) {
	// ...
    IRenderPipeline::DrawResult drawResult;
    {
    	// The render pipeline processes the draw data
        drawResult = mRenderPipeline->draw(frame, windowDirty, dirty, mLightGeometry, &mLayerUpdateQueue, mContentDrawBounds, mOpaque, mLightInfo, mRenderNodes, &(profiler()), mBufferParams, profilerLock());
    }
	// ...
	bool requireSwap = false;
    bool didDraw = false;

    int error = OK;
    // The render pipeline swaps the processed data into the buffer
    bool didSwap = mRenderPipeline->swapBuffers(frame, drawResult, windowDirty, mCurrentFrameInfo, &requireSwap);
    // ...
}

From the RenderNode::syncDisplayList method we can see that syncing the data simply means moving the mStagingDisplayList produced during the recording phase into mDisplayList, after which mDisplayList can be used for rendering.

cpp
// frameworks/base/libs/hwui/RenderNode.cpp
void RenderNode::syncDisplayList(TreeObserver& observer, TreeInfo* info) {
    // Make sure we inc first so that we don't fluctuate between 0 and 1,
    // which would thrash the layer cache
    if (mStagingDisplayList) {
        mStagingDisplayList.updateChildren([](RenderNode* child) { child->incParentRefCount(); });
    }
    deleteDisplayList(observer, info);
    mDisplayList = std::move(mStagingDisplayList);
    if (mDisplayList) {
        WebViewSyncData syncData{.applyForceDark = shouldEnableForceDark(info)};
        mDisplayList.syncContents(syncData);
        handleForceDark(info);
    }
}

From the source analysis above: once drawing finishes, the draw commands are recorded into mStagingDisplayList; during the sync phase the main thread blocks until mStagingDisplayList has been moved into mDisplayList, after which the rendering task is submitted to the RenderThread, where the render pipeline executes the draw commands and the SurfaceFlinger process is notified to composite the resulting data.
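The staging/active double buffer can be reduced to a few lines. The sketch below is a simplified Java analogue of RenderNode's two display-list fields (names are illustrative; the real implementation is native C++ and also manages child reference counts and WebView sync data):

```java
import java.util.ArrayList;
import java.util.List;

// Simplified analogue of RenderNode's staging/active display-list swap.
public class RenderNodeSketch {
    private List<String> stagingDisplayList; // written by the UI thread
    private List<String> displayList;        // read by the render thread

    // finishRecording: the UI thread publishes freshly recorded commands.
    public void setStagingDisplayList(List<String> ops) {
        stagingDisplayList = ops;
    }

    // syncDisplayList: runs while the UI thread is blocked in postAndWait,
    // so the move is safe without further locking.
    public void syncDisplayList() {
        displayList = stagingDisplayList;
        stagingDisplayList = null;
    }

    public List<String> getDisplayList() {
        return displayList;
    }

    // One record-then-sync round trip.
    static List<String> demo() {
        RenderNodeSketch node = new RenderNodeSketch();
        List<String> ops = new ArrayList<>();
        ops.add("drawRect");
        node.setStagingDisplayList(ops);
        node.syncDisplayList();
        return node.getDisplayList();
    }

    public static void main(String[] args) {
        System.out.println(demo());
    }
}
```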

Summary

Overall, the hardware drawing flow differs from the software drawing flow in the following ways:

  1. With hardware rendering, the main thread mainly records and syncs draw commands, whereas with software rendering the main thread executes the draw commands itself and produces the final pixel data;
  2. Hardware rendering uses RenderNode to track whether a View needs its DisplayList rebuilt, skipping unnecessary drawing work, whereas software rendering redraws every View;
  3. Hardware rendering introduces a render thread to offload work from the main thread, whereas software rendering processes all draw commands on the main thread.

The diagram below lays out the main structure of the hardware drawing flow to help understand and master it as a whole.
