1. Introduction
In Binder - 4: Obtaining a Service, we obtained the handle corresponding to SurfaceFlinger through the driver layer, used it to create a BpBinder, and then converted that into a BpSurfaceComposer.
In this article, we take one call on BpSurfaceComposer as an example and look at how a Binder call travels from the Client side to the Server side.
2. Source Code Analysis
As in the previous article, we start from SurfaceComposerClient. After obtaining the BpBinder through connectLocked, it is stored in the mComposerService object. Let's pick a method to analyze; here we use createDisplay as the example:
frameworks/native/libs/gui/SurfaceComposerClient.cpp
```c++
sp<IBinder> SurfaceComposerClient::createDisplay(const String8& displayName, bool secure) {
    return ComposerService::getComposerService()->createDisplay(displayName,
                                                                secure);
}
```
Here, getComposerService returns mComposerService, the proxy that ComposerService created from the SurfaceFlinger handle. Let's go straight into the createDisplay implementation.
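As a side note, the way ComposerService builds this proxy follows the standard libbinder pattern. The sketch below is a simplified illustration of that pattern, not the verbatim ComposerService code; the helper name getSurfaceComposer is ours.

```c++
// Simplified sketch of the pattern ComposerService follows (not the verbatim
// AOSP code): ask ServiceManager for the "SurfaceFlinger" binder and wrap it
// in an ISurfaceComposer proxy.
#include <binder/IServiceManager.h>
#include <gui/ISurfaceComposer.h>
#include <utils/String16.h>

using namespace android;

sp<ISurfaceComposer> getSurfaceComposer() {
    sp<IServiceManager> sm = defaultServiceManager();
    // getService returns a BpBinder wrapping the handle obtained from the driver
    sp<IBinder> binder = sm->getService(String16("SurfaceFlinger"));
    if (binder == nullptr) return nullptr;
    // interface_cast creates the BpSurfaceComposer around that BpBinder
    return interface_cast<ISurfaceComposer>(binder);
}
```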
2.1 Client process - createDisplay: the call begins
frameworks/native/libs/gui/ISurfaceComposer.cpp
```c++
sp<IBinder> createDisplay(const String8& displayName, bool secure) override {
    Parcel data, reply;
    data.writeInterfaceToken(ISurfaceComposer::getInterfaceDescriptor());
    status_t status = data.writeString8(displayName);
    if (status) {
        return nullptr;
    }
    //...
    status = remote()->transact(BnSurfaceComposer::CREATE_DISPLAY, data, &reply);
    if (status) {
        return nullptr;
    }
    sp<IBinder> display;
    status = reply.readNullableStrongBinder(&display);
    if (status) {
        return nullptr;
    }
    return display;
}
```
We can see that it calls remote()->transact directly. We already know that remote() refers to the BpBinder, so the call now enters BpBinder's transact() method. Most of what follows is the same as in Binder - 3: Registering a Service; the main difference is the handle (the layout of the transaction payload is sketched after the call chain below). Let's focus on the parts that change:
IPCThreadState::transact (unchanged)
-> IPCThreadState::writeTransactionData
-> IPCThreadState::waitForResponse
-> IPCThreadState::talkWithDriver (enters the driver layer)
-> binder_thread_write
-> binder_transaction
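Before diving in, it helps to keep the shape of the payload in mind. What writeTransactionData hands to the driver together with BC_TRANSACTION is a struct binder_transaction_data; compared with the ServiceManager case, essentially only the target handle differs. The following is an abridged, self-contained rendition with simplified field types; it is not the exact UAPI definition, which lives in the kernel's binder UAPI header.

```c++
// Abridged sketch of struct binder_transaction_data (field types simplified;
// not the exact UAPI definition).
#include <cstdint>

struct binder_transaction_data_sketch {
    union {
        uint32_t handle;    // filled in by the client: SurfaceFlinger's handle here (0 would mean ServiceManager)
        uint64_t ptr;       // filled in by the driver for the server: weakref of the target BBinder
    } target;
    uint64_t cookie;        // for the server: the BBinder* itself
    uint32_t code;          // e.g. BnSurfaceComposer::CREATE_DISPLAY
    uint32_t flags;         // e.g. TF_ONE_WAY
    int32_t  sender_pid;    // filled in by the driver
    uint32_t sender_euid;   // filled in by the driver
    uint64_t data_size;     // size of the serialized Parcel data
    uint64_t offsets_size;  // size of the flat_binder_object offset array
    struct {
        uint64_t buffer;    // address of the Parcel data
        uint64_t offsets;   // address of the offsets array
    } data;                 // the real struct makes this a union with a small inline buffer
};
```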
2.2 Client process - binder_transaction, BC_TRANSACTION
```c++
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    if (reply) {
        //...
    } else {
        //we are the Client this time, so the handle is non-zero
        if (tr->target.handle) {
            struct binder_ref *ref;
            //look up the binder_ref that this handle denotes in the sending process
            ref = binder_get_ref_olocked(proc, tr->target.handle,
                             true);
            if (ref) {
                //take a strong reference on the binder_node and return it
                target_node = binder_get_node_refs_for_txn(
                        ref->node, &target_proc,
                        &return_error);
            }
        } else {
            //ServiceManager
        }
    }
    //...
    //compute the offsets
    off_start_offset = ALIGN(tr->data_size, sizeof(void *));
    buffer_offset = off_start_offset;
    off_end_offset = off_start_offset + tr->offsets_size;
    sg_buf_offset = ALIGN(off_end_offset, sizeof(void *));
    sg_buf_end_offset = sg_buf_offset + extra_buffers_size -
        ALIGN(secctx_sz, sizeof(u64));
    off_min = 0;
    //split the data according to the offsets; in this example we only wrote two plain
    //values (the interface token and the display name), so there are no binder objects,
    //offsets_size is 0, and the loop body is never entered
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(binder_size_t)) {
    }
    //wake up the other side
    return;
}
```
We again skip the methods that repeat what we have already seen and focus only on what changes.
This time the target is the peer's Binder object (SurfaceFlinger's binder_node), no longer the ServiceManager.
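If the handle-to-node resolution feels abstract, the following is a purely conceptual, user-space model of what the lookup amounts to. It is not driver code: the real driver uses rb-trees (refs_by_desc) plus careful locking and reference counting, and every name below is illustrative only.

```c++
// Conceptual model only: how a per-process handle resolves to the node that
// lives in the server process. All names are illustrative, not driver code.
#include <cstdint>
#include <map>

struct Node;

struct Process {
    std::map<uint32_t, Node*> refsByHandle;  // this process's private handle table
};

struct Node {
    Process*  owner;    // the process that hosts the real BBinder (here: SurfaceFlinger)
    uintptr_t ptr;      // weakref address in the owner's user space
    uintptr_t cookie;   // BBinder* in the owner's user space
};

// Roughly what binder_transaction does for a non-zero handle:
Node* resolveTarget(Process& caller, uint32_t handle) {
    auto it = caller.refsByHandle.find(handle);          // cf. binder_get_ref_olocked
    return it == caller.refsByHandle.end() ? nullptr : it->second;
}
```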
2.3 Server process - BR_TRANSACTION
After the Binder driver wakes up the Server process, one of the Server's binder worker threads starts running and enters the BR_TRANSACTION case.
We already covered this part in Binder - 3: Registering a Service, but a few things have changed:
```c++
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    switch ((uint32_t)cmd) {
    case BR_TRANSACTION_SEC_CTX:
    case BR_TRANSACTION:
        {
            binder_transaction_data_secctx tr_secctx;
            binder_transaction_data& tr = tr_secctx.transaction_data;
            //read the binder_transaction_data_secctx payload
            if (cmd == (int) BR_TRANSACTION_SEC_CTX) {
                result = mIn.read(&tr_secctx, sizeof(tr_secctx));
            } else {
                result = mIn.read(&tr, sizeof(tr));
                tr_secctx.secctx = 0;
            }
            Parcel buffer;
            //wrap the buffer in a Parcel object
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer);
            //...
            Parcel reply;
            status_t error;
            //this time our target is no longer 0
            if (tr.target.ptr) {
                //try to get the BBinder object behind the target, then call transact on it
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    //balance the attemptIncStrong above
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
                    error = UNKNOWN_TRANSACTION;
                }
            } else {
                //this branch was taken in the ServiceManager case, but not this time
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                constexpr uint32_t kForwardReplyFlags = TF_CLEAR_BUF;
                //tell the peer that the Binder call has completed
                sendReply(reply, (tr.flags & kForwardReplyFlags));
            } else {
                //...
            }
            //...
        }
        break;
    }
    return result;
}
```
This time the target has changed: it is no longer ServiceManager, so transact is invoked on the corresponding Binder object, and that Binder object is exactly the one we registered back when the Service was registered.
2.4 Server process - reinterpret_cast
Once the cookie is in hand, we see a call to reinterpret_cast. reinterpret_cast is simply C++'s most permissive form of type conversion. The cookie here is the address of the BBinder object that we handed to the Binder driver earlier, so one reinterpret_cast is enough to recover the BBinder object, which is the Service object itself.
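Here is a tiny self-contained illustration of that round trip (FakeBBinder is a stand-in, not the real BBinder): the object's address travels through the driver as a plain integer cookie, and because the transaction is delivered back into the same process's address space, the pointer can simply be cast back.

```c++
// Self-contained illustration of the cookie round trip (FakeBBinder is a
// stand-in for BBinder). The server's object address goes to the driver as an
// integer cookie and, since it comes back in the same address space, can be
// cast straight back into a usable pointer.
#include <cstdint>
#include <cassert>

struct FakeBBinder {
    int transact(uint32_t code) { return static_cast<int>(code); }
};

int main() {
    FakeBBinder service;

    // What user space handed to the driver at registration time: the address as an integer.
    uintptr_t cookie = reinterpret_cast<uintptr_t>(&service);

    // What executeCommand later does with tr.cookie: cast it straight back.
    FakeBBinder* target = reinterpret_cast<FakeBBinder*>(cookie);
    assert(target == &service);

    (void)target->transact(/*code=*/1);
    return 0;
}
```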
2.5 Server process - transact
```c++
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        //...
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    //...
    return err;
}
```
By default, execution goes straight into the onTransact function, which each Service implements for itself. Here our Service, i.e. the BBinder, is of type BnSurfaceComposer, and the real implementation behind it is SurfaceFlinger. The simplified sketch below shows how these classes relate; after that, let's look at BnSurfaceComposer's real onTransact:
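A minimal, self-contained sketch of the class chain: the class names match AOSP, but the bodies are heavily abridged stand-ins (the real signatures take Parcel, reply, and flags). It shows why the BBinder recovered from the cookie ends up dispatching into SurfaceFlinger.

```c++
// Heavily abridged sketch of the class chain; everything is trimmed down to
// the dispatch shape only.
#include <cstdint>

class BBinder {
public:
    virtual ~BBinder() = default;
    virtual int transact(uint32_t code) { return onTransact(code); }   // called from executeCommand
protected:
    virtual int onTransact(uint32_t code) { (void)code; return 0; }
};

// BnSurfaceComposer: unpacks the Parcel and forwards to the business method.
class BnSurfaceComposer : public BBinder {
protected:
    int onTransact(uint32_t code) override {
        // case CREATE_DISPLAY: read the arguments, then:
        (void)code;
        return createDisplay();
    }
    virtual int createDisplay() = 0;   // declared by ISurfaceComposer in the real code
};

// SurfaceFlinger: the object whose address was registered as the cookie.
class SurfaceFlinger : public BnSurfaceComposer {
    int createDisplay() override { /* the real work happens here */ return 0; }
};
```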
2.6 Server process - onTransact
```c++
status_t BnSurfaceComposer::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch(code) {
        case CREATE_DISPLAY: {
            CHECK_INTERFACE(ISurfaceComposer, data, reply);
            String8 displayName;
            SAFE_PARCEL(data.readString8, &displayName);
            bool secure = false;
            SAFE_PARCEL(data.readBool, &secure);
            //call the real implementation
            sp<IBinder> display = createDisplay(displayName, secure);
            //write the result into the reply
            SAFE_PARCEL(reply->writeStrongBinder, display);
            return NO_ERROR;
        }
        //...
    }
}
```
In BnSurfaceComposer's onTransact method, the createDisplay method is finally invoked. createDisplay is implemented by SurfaceFlinger, and its return value, which happens to be a Binder object (the display token), is written into the reply.
If you look closely, you will notice that the BpSurfaceComposer call and the BnSurfaceComposer dispatch live in the same file; one is the Client side and the other the Server side. Even though they sit side by side in the source, the path between them crosses far more ground than the surface suggests.
2.7 Server process - sendReply
Back in the executeCommand method: the call has now completed and the reply is loaded with data, so it is time to send the result back, which is what sendReply does:
```c++
status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags)
{
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;
    return waitForResponse(nullptr, nullptr);
}
```
Very simple: it first writes the transaction data, this time with BC_REPLY, and then talks to the binder driver via waitForResponse. Note that waitForResponse is passed two null arguments here, meaning the Server side does not expect a reply of its own; once the driver has consumed the data, this side of the call is finished.
Without further ado, let's go back into the driver layer.
2.8 Driver - binder_thread_write, BC_REPLY
Back in the driver, we go through the same functions as before, except this time the command is different: it is now BC_REPLY:
```c++
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        //note that BC_TRANSACTION and BC_REPLY share the same case,
        //which tells us the data flows along the same path for both
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            //copy the binder_transaction_data from the user process
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
        }
        *consumed = ptr - buffer;
    }
    return 0;
}
```
The flow is the same as before and we end up in binder_transaction again, but this time reply is true:
```c++
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    if (reply) {
        in_reply_to = thread->transaction_stack;
        //sanity-check the thread
        if (in_reply_to->to_thread != thread) {
            //error
        }
        thread->transaction_stack = in_reply_to->to_parent;
        //get the target thread (the one that sent the original transaction)
        target_thread = binder_get_txn_from_and_acq_inner(in_reply_to);
        if (target_thread == NULL) {
            //error
        }
        if (target_thread->transaction_stack != in_reply_to) {
            //error
        }
        target_proc = target_thread->proc;
        target_proc->tmp_ref++;
    } else {
        //...
    }
    //compute the offsets
    off_start_offset = ALIGN(tr->data_size, sizeof(void *));
    buffer_offset = off_start_offset;
    off_end_offset = off_start_offset + tr->offsets_size;
    sg_buf_offset = ALIGN(off_end_offset, sizeof(void *));
    sg_buf_end_offset = sg_buf_offset + extra_buffers_size -
        ALIGN(secctx_sz, sizeof(u64));
    off_min = 0;
    //split the data according to the offsets; this time a Binder object (the display
    //token) has been written into the reply, so one copy/translation is performed,
    //just as we saw before
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(binder_size_t)) {
    }
    //wake up the other side
    return;
}
```
Here the driver finds the peer, i.e. the Client thread that issued the original call and is still waiting for it. Let's switch back to the Client side, which now reads the data and processes the BINDER_WORK_TRANSACTION work item:
2.9 Client process - waking up in waitForResponse
Because createDisplay is a synchronous call, the Client thread has been blocked the whole time inside waitForResponse -> talkWithDriver, i.e. in the BINDER_WRITE_READ ioctl. (The poll-driven handlePolledCommands / getAndExecuteCommand path is used for incoming transactions on looper threads, not for the reply to a synchronous transact.) When the driver wakes the thread up, binder_thread_read fills its read buffer, talkWithDriver returns, and waitForResponse gets to process the command. So the next stop is the data-reading side of the driver:
2.10 Kernel - binder_thread_read, BINDER_WORK_TRANSACTION
```c++
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data_secctx tr;
        struct binder_transaction_data *trd = &tr.transaction_data;
        struct binder_work *w = NULL;
        struct list_head *list = NULL;
        struct binder_transaction *t = NULL;
        struct binder_thread *t_from;
        size_t trsize = sizeof(*trd);
        //dequeue the next work item
        //(the real code first picks either the thread's or the process's todo list)
        w = binder_dequeue_work_head_ilocked(list);
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            //recover the binder_transaction from the binder_work
            t = container_of(w, struct binder_transaction, work);
        } break;
        }
        //this time target_node is NULL, so cmd becomes BR_REPLY
        if (t->buffer->target_node) {
            //...
        } else {
            trd->target.ptr = 0;
            trd->cookie = 0;
            cmd = BR_REPLY;
        }
        trd->code = t->code;
        trd->flags = t->flags;
        //write the data out
        //copy cmd to user space; here cmd is BR_REPLY
        if (put_user(cmd, (uint32_t __user *)ptr)) {
            //...
        }
        ptr += sizeof(uint32_t);
        //copy tr to user space; tr is a thin wrapper around binder_transaction_data,
        //and binder_transaction_data carries the address of the user-space buffer
        if (copy_to_user(ptr, &tr, trsize)) {
            //...
        }
    }
}
```
Now that cmd has become BR_REPLY, let's return to the Client process:
2.11 Client process - BR_REPLY
```c++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        case BR_REPLY:
            {
                binder_transaction_data tr;
                //read the transaction data
                err = mIn.read(&tr, sizeof(tr));
                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        //hand the buffer over to the reply Parcel
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer);
                    } else {
                        //error
                    }
                } else {
                    //...
                }
            }
            goto finish;
        }
    }
finish:
    //...
    return err;
}
```
2.12 The end of the call
At this point, createDisplay enters its final stage:
```c++
sp<IBinder> createDisplay(const String8& displayName, bool secure) override {
    Parcel data, reply;
    data.writeInterfaceToken(ISurfaceComposer::getInterfaceDescriptor());
    status_t status = data.writeString8(displayName);
    if (status) {
        return nullptr;
    }
    //...
    status = remote()->transact(BnSurfaceComposer::CREATE_DISPLAY, data, &reply);
    if (status) {
        return nullptr;
    }
    sp<IBinder> display;
    //read the reply
    status = reply.readNullableStrongBinder(&display);
    if (status) {
        return nullptr;
    }
    return display;
}
```
The returned object is read out of the reply, turned into an IBinder, and handed back to the caller; createDisplay is complete.
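To close the loop, here is a hedged usage sketch from the application side. It assumes a caller that is permitted to talk to SurfaceFlinger, and that destroyDisplay is available as the matching teardown call in this AOSP version.

```c++
// Hedged usage sketch: one call to createDisplay is exactly the Binder round
// trip traced above. Error handling and display configuration are trimmed.
#include <gui/SurfaceComposerClient.h>
#include <utils/String8.h>

using namespace android;

void createAndDestroyVirtualDisplay() {
    sp<IBinder> token =
            SurfaceComposerClient::createDisplay(String8("demo-virtual"), false /*secure*/);
    if (token == nullptr) {
        return;  // the transaction failed, or SurfaceFlinger refused the request
    }
    // ...configure the display (surface, layer stack, projection) via
    // SurfaceComposerClient::Transaction before using it...
    SurfaceComposerClient::destroyDisplay(token);
}
```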
Figure 2.1 - One complete Binder call
3. Summary
A single Binder call looks simple, but it is actually quite involved and touches a great many details along the way.
Here we have only walked through the main path. With that main path clear, we know what Binder actually does and no longer need to feel intimidated by it, and any deeper exploration of its internals can branch out from this trunk.