How is the VSYNC signal delivered?
I was asked this question in an interview a couple of days ago. I hadn't actually read the relevant source code at the time, so I answered "Binder". After going through the source afterwards it turned out I was way off, so today I'm making up for that lesson.
Choreographer
We know that when the upper layers need to draw, they request a vsync signal. That request starts in Choreographer, so let's begin there.
Java
private void scheduleVsyncLocked() {
    try {
        Trace.traceBegin(Trace.TRACE_TAG_VIEW, "Choreographer#scheduleVsyncLocked");
        mDisplayEventReceiver.scheduleVsync();
    } finally {
        Trace.traceEnd(Trace.TRACE_TAG_VIEW);
    }
}
There is not much to explain here: once a draw is scheduled in ViewRootImpl, the vsync request winds its way down to this point. mDisplayEventReceiver is an instance of FrameDisplayEventReceiver.
Java
/**
 * Schedules a single vertical sync pulse to be delivered when the next
 * display frame begins.
 */
@UnsupportedAppUsage
public void scheduleVsync() {
    if (mReceiverPtr == 0) {
        Log.w(TAG, "Attempted to schedule a vertical sync pulse but the display event "
                + "receiver has already been disposed.");
    } else {
        nativeScheduleVsync(mReceiverPtr);
    }
}
nativeScheduleVsync is a native function:
C++
static void nativeScheduleVsync(JNIEnv* env, jclass clazz, jlong receiverPtr) {
    sp<NativeDisplayEventReceiver> receiver =
            reinterpret_cast<NativeDisplayEventReceiver*>(receiverPtr);
    // First, let's look at what NativeDisplayEventReceiver actually is
    status_t status = receiver->scheduleVsync();
    if (status) {
        String8 message;
        message.appendFormat("Failed to schedule next vertical sync pulse. status=%d", status);
        jniThrowRuntimeException(env, message.string());
    }
}
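As a side note, the receiverPtr used here is the native pointer returned by nativeInit when the Java-side DisplayEventReceiver is constructed. A simplified sketch of that setup (based on android_view_DisplayEventReceiver.cpp; exact signatures and details vary across Android versions):
C++
// Simplified sketch; error handling and version-specific details trimmed.
static jlong nativeInit(JNIEnv* env, jclass clazz, jobject receiverWeak,
                        jobject messageQueueObj, jint vsyncSource, jint eventRegistration) {
    sp<MessageQueue> messageQueue = android_os_MessageQueue_getMessageQueue(env, messageQueueObj);

    // Create the native receiver and hook it up (initialize() registers the
    // event channel with the app's Looper, as we will see below).
    sp<NativeDisplayEventReceiver> receiver = new NativeDisplayEventReceiver(
            env, receiverWeak, messageQueue, vsyncSource, eventRegistration);
    status_t status = receiver->initialize();
    if (status) {
        jniThrowRuntimeException(env, "Failed to initialize display event receiver.");
        return 0;
    }

    // Keep a strong reference alive; the Java object stores the raw pointer.
    receiver->incStrong((void*)nativeInit);
    return reinterpret_cast<jlong>(receiver.get());
}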
DisplayEventDispatcher
C++
class NativeDisplayEventReceiver : public DisplayEventDispatcher {
public:
    NativeDisplayEventReceiver(JNIEnv* env, jobject receiverWeak,
                               const sp<MessageQueue>& messageQueue, jint vsyncSource,
                               jint eventRegistration);
    void dispose();

protected:
    virtual ~NativeDisplayEventReceiver();

private:
    jobject mReceiverWeakGlobal;
    sp<MessageQueue> mMessageQueue;

    void dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId, uint32_t count,
                       VsyncEventData vsyncEventData) override;
    void dispatchHotplug(nsecs_t timestamp, PhysicalDisplayId displayId, bool connected) override;
    void dispatchModeChanged(nsecs_t timestamp, PhysicalDisplayId displayId, int32_t modeId,
                             nsecs_t vsyncPeriod) override;
    void dispatchFrameRateOverrides(nsecs_t timestamp, PhysicalDisplayId displayId,
                                    std::vector<FrameRateOverride> overrides) override;
    void dispatchNullEvent(nsecs_t timestamp, PhysicalDisplayId displayId) override {}
};
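For completeness, dispatchVsync is the path back up: when a vsync event arrives, the native receiver calls into the Java DisplayEventReceiver, which is how Choreographer's FrameDisplayEventReceiver.onVsync ends up running. Roughly (a simplified sketch, not the exact AOSP code):
C++
// Simplified sketch of NativeDisplayEventReceiver::dispatchVsync.
void NativeDisplayEventReceiver::dispatchVsync(nsecs_t timestamp, PhysicalDisplayId displayId,
                                               uint32_t count, VsyncEventData vsyncEventData) {
    JNIEnv* env = AndroidRuntime::getJNIEnv();

    // Resolve the weak reference back to the Java DisplayEventReceiver.
    ScopedLocalRef<jobject> receiverObj(env, jniGetReferent(env, mReceiverWeakGlobal));
    if (receiverObj.get()) {
        // Calls DisplayEventReceiver.dispatchVsync(), which Choreographer's
        // FrameDisplayEventReceiver turns into a doFrame() on the main thread.
        env->CallVoidMethod(receiverObj.get(), gDisplayEventReceiverClassInfo.dispatchVsync,
                            timestamp, displayId.value, count);
    }

    mMessageQueue->raiseAndClearException(env, "dispatchVsync");
}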
scheduleVsync is not defined in NativeDisplayEventReceiver, so we move on to its parent class, DisplayEventDispatcher.
C++
status_t DisplayEventDispatcher::scheduleVsync() {
    if (!mWaitingForVsync) {
        ALOGV("dispatcher %p ~ Scheduling vsync.", this);

        // Drain all pending events.
        nsecs_t vsyncTimestamp;
        PhysicalDisplayId vsyncDisplayId;
        uint32_t vsyncCount;
        VsyncEventData vsyncEventData;
        if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
            ALOGE("dispatcher %p ~ last event processed while scheduling was for %" PRId64 "", this,
                  ns2ms(static_cast<nsecs_t>(vsyncTimestamp)));
        }

        // Request the next vsync signal
        status_t status = mReceiver.requestNextVsync();
        if (status) {
            ALOGW("Failed to request next vsync, status=%d", status);
            return status;
        }

        mWaitingForVsync = true;
        mLastScheduleVsyncTime = systemTime(SYSTEM_TIME_MONOTONIC);
    }
    return OK;
}
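processPendingEvents is where events are actually read off the channel. A trimmed-down sketch (only the vsync case shown; field names are approximate and differ slightly between versions):
C++
// Simplified sketch of DisplayEventDispatcher::processPendingEvents.
bool DisplayEventDispatcher::processPendingEvents(nsecs_t* outTimestamp,
        PhysicalDisplayId* outDisplayId, uint32_t* outCount, VsyncEventData* outVsyncEventData) {
    bool gotVsync = false;
    DisplayEventReceiver::Event buf[EVENT_BUFFER_SIZE];
    ssize_t n;
    // getEvents() ultimately does a recv() on the underlying socket (more on that below).
    while ((n = mReceiver.getEvents(buf, EVENT_BUFFER_SIZE)) > 0) {
        for (ssize_t i = 0; i < n; i++) {
            const DisplayEventReceiver::Event& ev = buf[i];
            if (ev.header.type == DisplayEventReceiver::DISPLAY_EVENT_VSYNC) {
                // Keep only the most recent vsync in this batch.
                gotVsync = true;
                *outTimestamp = ev.header.timestamp;
                *outDisplayId = ev.header.displayId;
                *outCount = ev.vsync.count;
                // (the frame-timeline data in ev.vsync is copied into
                //  *outVsyncEventData here as well)
            }
            // Hotplug / mode-change events are handled here too, omitted.
        }
    }
    return gotVsync;
}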
mReceiver is declared in the header file; its type is DisplayEventReceiver.
C++
class DisplayEventDispatcher : public LooperCallback {
public:
    explicit DisplayEventDispatcher(
            const sp<Looper>& looper,
            ISurfaceComposer::VsyncSource vsyncSource = ISurfaceComposer::eVsyncSourceApp,
            ISurfaceComposer::EventRegistrationFlags eventRegistration = {});
    ...

private:
    sp<Looper> mLooper;
    DisplayEventReceiver mReceiver;
    bool mWaitingForVsync;
    uint32_t mLastVsyncCount;
    nsecs_t mLastScheduleVsyncTime;
    ...
};
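mLooper is the other half of the story. DisplayEventDispatcher registers the receive end of the event channel with the app's Looper, and the Looper callback fires whenever SurfaceFlinger writes an event into the other end. A rough sketch (simplified from DisplayEventDispatcher.cpp; error handling and logging trimmed):
C++
// Simplified sketch of how the event channel is wired into the Looper.
status_t DisplayEventDispatcher::initialize() {
    status_t result = mReceiver.initCheck();
    if (result) {
        return result;
    }
    if (mLooper != nullptr) {
        // getFd() is the receive end of the channel created by DisplayEventReceiver.
        // When SurfaceFlinger writes a vsync event into the other end, the Looper
        // wakes up and invokes handleEvent() below.
        int rc = mLooper->addFd(mReceiver.getFd(), 0, Looper::EVENT_INPUT, this, nullptr);
        if (rc < 0) {
            return UNKNOWN_ERROR;
        }
    }
    return OK;
}

int DisplayEventDispatcher::handleEvent(int receiveFd, int events, void* data) {
    // Drain everything queued on the channel, keep the latest vsync, and hand it
    // to dispatchVsync() (the JNI callback sketched earlier).
    nsecs_t vsyncTimestamp;
    PhysicalDisplayId vsyncDisplayId;
    uint32_t vsyncCount;
    VsyncEventData vsyncEventData;
    if (processPendingEvents(&vsyncTimestamp, &vsyncDisplayId, &vsyncCount, &vsyncEventData)) {
        mWaitingForVsync = false;
        dispatchVsync(vsyncTimestamp, vsyncDisplayId, vsyncCount, vsyncEventData);
    }
    return 1;  // keep the fd registered with the Looper
}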
The createDisplayEventConnection function is implemented in SurfaceFlinger, and BitTube is an API that wraps a socket.
C++
DisplayEventReceiver::DisplayEventReceiver(
        ISurfaceComposer::VsyncSource vsyncSource,
        ISurfaceComposer::EventRegistrationFlags eventRegistration) {
    sp<ISurfaceComposer> sf(ComposerService::getComposerService());
    if (sf != nullptr) {
        mEventConnection = sf->createDisplayEventConnection(vsyncSource, eventRegistration);
        if (mEventConnection != nullptr) {
            mDataChannel = std::make_unique<gui::BitTube>();
            const auto status = mEventConnection->stealReceiveChannel(mDataChannel.get());
            if (!status.isOk()) {
                ALOGE("stealReceiveChannel failed: %s", status.toString8().c_str());
                mInitError = std::make_optional<status_t>(status.transactionError());
                mDataChannel.reset();
                mEventConnection.clear();
            }
        }
    }
}

status_t DisplayEventReceiver::requestNextVsync() {
    if (mEventConnection != nullptr) {
        mEventConnection->requestNextVsync();
        return NO_ERROR;
    }
    return mInitError.has_value() ? mInitError.value() : NO_INIT;
}
C++
class DisplayEventReceiver {
private:
    // The connection to the event source
    sp<IDisplayEventConnection> mEventConnection;
    // The data channel; this is the actual communication mechanism
    std::unique_ptr<gui::BitTube> mDataChannel;
    std::optional<status_t> mInitError;
};
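mDataChannel is the interesting part. BitTube (in libgui) is a thin wrapper around a Unix-domain socket pair. Its setup looks roughly like this (a simplified sketch; buffer sizes and flags vary by version):
C++
// Simplified sketch of gui::BitTube initialization (frameworks/native/libs/gui).
void BitTube::init(size_t rcvbuf, size_t sndbuf) {
    int sockets[2];
    // A SEQPACKET Unix-domain socket pair: message-oriented, in-order delivery.
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, sockets) == 0) {
        setsockopt(sockets[0], SOL_SOCKET, SO_RCVBUF, &rcvbuf, sizeof(rcvbuf));
        setsockopt(sockets[1], SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
        // Non-blocking so neither side can stall on a full or empty buffer.
        fcntl(sockets[0], F_SETFL, O_NONBLOCK);
        fcntl(sockets[1], F_SETFL, O_NONBLOCK);
        mReceiveFd.reset(sockets[0]);
        mSendFd.reset(sockets[1]);
    } else {
        ALOGE("BitTube: socketpair creation failed (%s)", strerror(errno));
    }
}
In current versions the socket pair is created on the SurfaceFlinger side of the connection; stealReceiveChannel() in the constructor above then hands the receive fd over to the app process, and from then on the vsync events flow over that socket.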
The answer
Having traced the code this far, the answer is already obvious: the vsync events are ultimately delivered over a socket, not over Binder, which is otherwise the more common IPC mechanism in Android.
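To make that concrete: when a vsync fires, SurfaceFlinger's event thread writes a small DisplayEventReceiver::Event struct into the send end of the BitTube, and the app side reads it back out in processPendingEvents. Conceptually (a simplified sketch of DisplayEventReceiver.cpp):
C++
// Both ends of the channel; the real code uses gui::BitTube::sendObjects/recvObjects.
ssize_t DisplayEventReceiver::sendEvents(gui::BitTube* dataChannel,
                                         Event const* events, size_t count) {
    // Called on the SurfaceFlinger side: one send() on the socket per batch of events.
    return gui::BitTube::sendObjects(dataChannel, events, count);
}

ssize_t DisplayEventReceiver::getEvents(Event* events, size_t count) {
    // Called on the app side from DisplayEventDispatcher::processPendingEvents().
    return gui::BitTube::recvObjects(mDataChannel.get(), events, count);
}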
Why?
Why a socket rather than Binder? Thinking about it, input events (InputEvent) are also delivered to apps over a socket. So let's summarize what the two scenarios have in common:
- 1: Both are high-frequency communication, on the order of milliseconds.
- 2: Both need a stable, long-lived channel, one that effectively lasts for the whole time the device is powered on.
- 3: Both need bidirectional communication.
If this high-frequency traffic also went through Binder, it would eat into Binder's limited resources in Android (each process has a bounded transaction buffer and a small Binder thread pool), which by itself rules Binder out. Because of that limit, every one of the shared characteristics above points to a socket as the more suitable choice here.
The takeaway for our own development: when we need frequent, bidirectional, or stable high-volume communication, a socket should be the first option to consider.
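To close with a minimal, self-contained illustration of the same primitive in plain native code (a generic sketch, not Android framework code): a socketpair gives two connected, bidirectional endpoints with no filesystem name and no Binder involvement, which is exactly the kind of channel the scenarios above call for.
C++
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fds[2];
    // The same primitive BitTube builds on: a connected, message-oriented pair.
    if (socketpair(AF_UNIX, SOCK_SEQPACKET, 0, fds) != 0) {
        perror("socketpair");
        return 1;
    }

    // One end could be handed to another thread, or passed to another process
    // (fd passing over Binder, as SurfaceFlinger does), after which both sides
    // can exchange small fixed-size messages at high frequency in either direction.
    const char ping[] = "vsync";
    write(fds[0], ping, sizeof(ping));

    char buf[16] = {};
    read(fds[1], buf, sizeof(buf));
    printf("received: %s\n", buf);

    close(fds[0]);
    close(fds[1]);
    return 0;
}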