1 Comparing the Java-layer and native-layer call paths
The Java call path:
As covered earlier, the Java layer obtains the ServiceManager proxy class ServiceManagerProxy via getIServiceManager(). Let's look directly at its addService() method:
JAVA
public void addService(String name, IBinder service, boolean allowIsolated)
throws RemoteException {
Parcel data = Parcel.obtain();
Parcel reply = Parcel.obtain();
data.writeInterfaceToken(IServiceManager.descriptor);
data.writeString(name);
data.writeStrongBinder(service);
data.writeInt(allowIsolated ? 1 : 0);
mRemote.transact(ADD_SERVICE_TRANSACTION, data, reply, 0);
reply.recycle();
data.recycle();
}
- Here data is a Parcel; the payload is ultimately marshalled on the native side through JNI
- mRemote is a BinderProxy; mRemote.transact() is ultimately carried out by BpBinder::transact()
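For reference, a simplified paraphrase of the JNI bridge in frameworks/base/core/jni/android_util_Binder.cpp (roughly the Android 6.x/7.x sources; field and helper names may differ in other versions) showing how BinderProxy.transact() lands in BpBinder::transact():
c++
// Paraphrased sketch, not the verbatim AOSP source.
static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags)
{
    // Convert the Java Parcel objects to their native counterparts
    Parcel* data = parcelForJavaObject(env, dataObj);
    Parcel* reply = parcelForJavaObject(env, replyObj);
    // The native BpBinder pointer is stashed in a long field of the Java BinderProxy
    IBinder* target = (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
    // From here on the flow is identical to a purely native caller
    status_t err = target->transact(code, *data, reply, flags);
    return err == NO_ERROR ? JNI_TRUE : JNI_FALSE;
}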
The native call path:
The main() function of mediaserver:
c++
int main(int argc __unused, char **argv __unused)
{
...
sp<ProcessState> proc(ProcessState::self());
sp<IServiceManager> sm(defaultServiceManager());
...
MediaPlayerService::instantiate();
ResourceManagerService::instantiate();
...
// the binder thread pool is covered in a later article
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
}
instantiate:
c++
void MediaPlayerService::instantiate() {
defaultServiceManager()->addService(
String16("media.player"), new MediaPlayerService());
}
defaultServiceManager() returns the BpServiceManager instance, on which addService() is called with the service name string and the MediaPlayerService service object as arguments.
The service name "media.player" can be used to look the service up with the service list command:
ini
$ service list | grep media.player
149 media.player: [android.media.IMediaPlayerService]
BpServiceManager::addService()
c++
class BpServiceManager : public BpInterface<IServiceManager>
{
public:
...
virtual status_t addService(const String16& name, const sp<IBinder>& service,
bool allowIsolated)
{
Parcel data, reply;
// a 32-bit integer + a string (the string is "android.os.IServiceManager")
data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
// the service name, i.e. "media.player"
data.writeString16(name);
// wrap MediaPlayerService into a flat_binder_object struct
data.writeStrongBinder(service);
// allowIsolated
data.writeInt32(allowIsolated ? 1 : 0);
status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
return err == NO_ERROR ? reply.readExceptionCode() : err;
}
...
}
- Two Parcels are set up: data carries the outgoing payload, reply receives the data returned by the peer
- The payload is packed into data
- remote() returns the BpBinder, whose transact() is called to issue the request
Comparing the two, a request initiated from the Java layer and one initiated from the native layer end up doing the same thing: pack the data into a Parcel and call BpBinder::transact() to start the communication; the Java path merely goes through JNI to reach the native layer first.
The rest of this series uses MediaPlayerService as the running example.
2 addService: initiating the request
Continuing from BpServiceManager::addService() above.
2.1 Packing the data
2.1.1 Parcel::writeInterfaceToken
In data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()), getInterfaceDescriptor() returns "android.os.IServiceManager",
so the call is data.writeInterfaceToken("android.os.IServiceManager").
c++
status_t Parcel::writeInterfaceToken(const String16& interface)
{
// a 32-bit integer
writeInt32(IPCThreadState::self()->getStrictModePolicy() |
STRICT_MODE_PENALTY_GATHER);
// the string ("android.os.IServiceManager")
return writeString16(interface);
}
- IPCThreadState::getStrictModePolicy() returns mStrictModePolicy, whose initial value is 0, so the writeInt32() call simplifies to writeInt32(STRICT_MODE_PENALTY_GATHER).
- writeString16(interface) is writeString16("android.os.IServiceManager").
What these two values are for: when ServiceManager receives the data, it checks this header to decide whether the request is valid. These two values are that header.
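For reference, a trimmed-down paraphrase of how the native ServiceManager consumes this header in svcmgr_handler() (frameworks/native/cmds/servicemanager/service_manager.c); treat it as a sketch rather than the exact source:
C
// Paraphrased from svcmgr_handler(): read the strict-mode policy word, then the
// interface string, and reject the request if it is not "android.os.IServiceManager".
uint16_t svcmgr_id[] = { 'a','n','d','r','o','i','d','.','o','s','.',
                         'I','S','e','r','v','i','c','e','M','a','n','a','g','e','r' };
uint32_t strict_policy;
uint16_t *s;
size_t len;

strict_policy = bio_get_uint32(msg);   // the STRICT_MODE_PENALTY_GATHER word written above
s = bio_get_string16(msg, &len);       // the interface descriptor string
if (s == NULL) {
    return -1;
}
if ((len != (sizeof(svcmgr_id) / 2)) ||
    memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
    // header mismatch: the request is rejected
    return -1;
}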
2.1.2 data.writeString16(name)
data.writeString16(name) writes the MediaPlayerService service's name into data; the argument is name = "media.player", and it goes into the parcel.
2.1.3 data.writeStrongBinder(service)
The parameter val is the MediaPlayerService object.
c++
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
return flatten_binder(ProcessState::self(), val, this);
}
-> Parcel::flatten_binder()
This function wraps the MediaPlayerService object into a flat_binder_object struct.
c++
status_t flatten_binder(const sp<ProcessState>& /*proc*/,
const sp<IBinder>& binder, Parcel* out)
{
flat_binder_object obj;
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
// binder is non-NULL here
if (binder != NULL) {
// localBinder() returns the BBinder; local is non-NULL here
IBinder *local = binder->localBinder();
if (!local) {
BpBinder *proxy = binder->remoteBinder();
if (proxy == NULL) {
ALOGE("null proxy");
}
const int32_t handle = proxy ? proxy->handle() : 0;
obj.type = BINDER_TYPE_HANDLE;
obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
obj.handle = handle;
obj.cookie = 0;
} else { // local is non-NULL, so this branch is taken
obj.type = BINDER_TYPE_BINDER;
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
obj.cookie = reinterpret_cast<uintptr_t>(local);
}
} else {
obj.type = BINDER_TYPE_BINDER;
obj.binder = 0;
obj.cookie = 0;
}
return finish_flatten_binder(binder, obj, out);
}
Parameters: proc is the ProcessState object, binder is the MediaPlayerService object, and out is the Parcel itself.
binder->localBinder(): returns this for a BBinder; for a plain IBinder (a proxy) it returns NULL. MediaPlayerService's parent class is BBinder, and localBinder() is implemented in frameworks/native/libs/binder/Binder.cpp, so local is not NULL here.
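For reference, the two localBinder() implementations in frameworks/native/libs/binder/Binder.cpp that this distinction relies on:
c++
// IBinder's default implementation: a proxy has no local BBinder.
BBinder* IBinder::localBinder()
{
    return NULL;
}

// BBinder overrides it to return itself, so any service derived from BBinder
// (such as MediaPlayerService) takes the "local" branch in flatten_binder().
BBinder* BBinder::localBinder()
{
    return this;
}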
After all the assignments, obj looks like this:
c++
obj.flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS; // flags
obj.type = BINDER_TYPE_BINDER; // type
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs()); // MediaPlayerService's weak-reference object
obj.cookie = reinterpret_cast<uintptr_t>(local); // MediaPlayerService itself
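For reference, the flat_binder_object layout these fields are written into, as defined in the binder UAPI header (the exact definition may vary slightly across kernel versions):
C
struct flat_binder_object {
    __u32 type;                  /* BINDER_TYPE_BINDER / BINDER_TYPE_HANDLE / ... */
    __u32 flags;
    union {
        binder_uintptr_t binder; /* local object: weak-reference pointer */
        __u32 handle;            /* remote object: handle value */
    };
    binder_uintptr_t cookie;     /* local object: the BBinder itself */
};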
Finally, finish_flatten_binder() writes the struct into the Parcel.
--> Parcel::finish_flatten_binder()
c++
inline static status_t finish_flatten_binder(
const sp<IBinder>& /*binder*/, const flat_binder_object& flat, Parcel* out)
{
return out->writeObject(flat, false);
}
---> Parcel::writeObject()
c++
status_t Parcel::writeObject(const flat_binder_object& val, bool nullMetaData)
{
// 1 enoughData == false on this call
const bool enoughData = (mDataPos+sizeof(val)) <= mDataCapacity;
// 1 initially mObjectsSize = mObjectsCapacity = 0, so enoughObjects == false
const bool enoughObjects = mObjectsSize < mObjectsCapacity;
// ------- write -------
if (enoughData && enoughObjects) {
restart_write: // 4
// store the flat_binder_object itself
*reinterpret_cast<flat_binder_object*>(mData+mDataPos) = val;
...
// val.binder is non-zero here
if (nullMetaData || val.binder != 0) {
//record this object's data offset into mObjects[mObjectsSize] (mObjects[0] here)
mObjects[mObjectsSize] = mDataPos;
acquire_object(ProcessState::self(), val, this, &mOpenAshmemSize);
// bump mObjectsSize
mObjectsSize++;
}
// finish the write
return finishWrite(sizeof(flat_binder_object));
}
// ------------------
// ------- grow -------
if (!enoughData) {
//2 grow the data buffer
const status_t err = growData(sizeof(val));
if (err != NO_ERROR) return err;
}
if (!enoughObjects) {
//3 grow the objects array
size_t newSize = ((mObjectsSize+2)*3)/2;
//allocate memory
if (newSize*sizeof(binder_size_t) < mObjectsSize) return NO_MEMORY;
binder_size_t* objects = (binder_size_t*)realloc(mObjects,
newSize*sizeof(binder_size_t));
if (objects == NULL) return NO_MEMORY;
//point mObjects at the reallocated memory
mObjects = objects;
//record the new capacity of mObjects
mObjectsCapacity = newSize;
}
// ------------------
goto restart_write;
}
The actual data write happens at the restart_write label marked 4.
----> Parcel::finishWrite()
Finish the write: update the write position and the data size.
c++
status_t Parcel::finishWrite(size_t len)
{
if (len > INT32_MAX) {
return BAD_VALUE;
}
// len is the number of bytes just written (4 for an int32, sizeof(flat_binder_object) for the writeObject() call above)
mDataPos += len;
// advance the write position past the new data
if (mDataPos > mDataSize) {
// if we wrote past the previous end, mDataSize catches up to mDataPos
mDataSize = mDataPos;
...
}
return NO_ERROR;
}
2.1.4 data.writeInt32(allowIsolated ? 1 : 0)
data.writeInt32(allowIsolated ? 1 : 0) is called; allowIsolated is false, so this is data.writeInt32(0), written into the parcel.
2.2 Starting the transmission
2.2.1 BpBinder::transact()
c++
status_t BpBinder::transact(
uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
// code: ADD_SERVICE_TRANSACTION
// initial value is 1, assigned when the BpBinder is constructed
if (mAlive) {
status_t status = IPCThreadState::self()->transact(
mHandle, code, data, reply, flags);
if (status == DEAD_OBJECT) mAlive = 0;
return status;
}
return DEAD_OBJECT;
}
mAlive's initial value is 1, so IPCThreadState::self()->transact() is called.
2.2.2 IPCThreadState::transact():
BpBinder::transact() -> IPCThreadState::transact()
c++
status_t IPCThreadState::transact(int32_t handle,
uint32_t code, const Parcel& data,
Parcel* reply, uint32_t flags)
{
status_t err = data.errorCheck();
flags |= TF_ACCEPT_FDS;
...
if (err == NO_ERROR) {
// 1
err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
}
if (err != NO_ERROR) {
if (reply) reply->setError(err);
return (mLastError = err);
}
if ((flags & TF_ONE_WAY) == 0) {
...
// 2
if (reply) {
err = waitForResponse(reply);
} else {
Parcel fakeReply;
err = waitForResponse(&fakeReply);
}
...
} else {
...
}
return err;
}
Function parameters:
- handle: the mHandle of the calling BpBinder, which is 0 here. As covered in the earlier article on obtaining ServiceManager, it is set when the BpBinder is created; 0 means the request is addressed to ServiceManager.
- code: ADD_SERVICE_TRANSACTION
- data: the Parcel carrying the request data
- reply: the Parcel that receives the peer's response
- flags: the default value 0
Then:
1. writeTransactionData() packs the data (marker 1 in the code).
2. Once the data is packed (only the non-oneway case is discussed here), waitForResponse() is called to go and talk to the Binder driver (marker 2).
1> writeTransactionData()
It reads back the data packed into the Parcel earlier, repacks it into a binder_transaction_data struct tr that the binder driver understands, and then writes the BC_TRANSACTION command together with tr (whose code is ADD_SERVICE_TRANSACTION) into the Parcel mOut.
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::writeTransactionData()
c++
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
// cmd: BC_TRANSACTION code: ADD_SERVICE_TRANSACTION
binder_transaction_data tr;
tr.target.ptr = 0;
tr.target.handle = handle;
tr.code = code; // code: ADD_SERVICE_TRANSACTION
tr.flags = binderFlags;
tr.cookie = 0;
tr.sender_pid = 0;
tr.sender_euid = 0;
const status_t err = data.errorCheck();
if (err == NO_ERROR) {
tr.data_size = data.ipcDataSize();
tr.data.ptr.buffer = data.ipcData();
tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
tr.data.ptr.offsets = data.ipcObjects();
} else if (statusBuffer) {
...
} else {
...
}
mOut.writeInt32(cmd); // cmd: BC_TRANSACTION
mOut.write(&tr, sizeof(tr));
return NO_ERROR;
}
- ipcDataSize(): returns mDataSize, the length of the content in mData; it also marks the end of the data, i.e. where the next write would start
- ipcData(): returns mData, the base address of the data
- ipcObjectsCount(): returns mObjectsSize, the number of flat_binder_object offsets recorded (writeTransactionData() multiplies it by sizeof(binder_size_t) to get offsets_size)
- ipcObjects(): returns mObjects, the array holding each object's offset within the data
These Parcel internals will get their own article later; the sketch after this list is only meant to help visualize them.
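For reference, a simplified paraphrase of these Parcel member accessors (return types simplified; the real implementations live in frameworks/native/libs/binder/Parcel.cpp):
c++
// Members of class Parcel (simplified paraphrase, not the verbatim AOSP source).
const uint8_t*       ipcData() const         { return mData; }        // base address of the flat data
size_t               ipcDataSize() const     { return mDataSize; }    // bytes of valid data in mData
const binder_size_t* ipcObjects() const      { return mObjects; }     // offsets of each flat_binder_object
size_t               ipcObjectsCount() const { return mObjectsSize; } // number of such offsets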
After the assignments, tr holds:
C++
tr.target.handle = handle; //0, i.e. the default handle of the ServiceManager proxy
tr.code = code; // ADD_SERVICE_TRANSACTION
tr.flags = binderFlags; // TF_ACCEPT_FDS
tr.cookie = 0;
tr.sender_pid = 0;
tr.data_size = data.ipcDataSize(); // size of the data (mDataSize)
tr.data.ptr.buffer = data.ipcData(); // start address of the data (mData)
tr.offsets_size =
data.ipcObjectsCount()*sizeof(binder_size_t); // byte size of the offsets array (mObjectsSize * sizeof(binder_size_t))
tr.data.ptr.offsets = data.ipcObjects(); // the array of object offsets within data (mObjects)
2> waitForResponse()
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse()
C++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
...
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
...
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
...
case BR_DEAD_REPLY:
...
case BR_FAILED_REPLY:
...
case BR_ACQUIRE_RESULT:
...
case BR_REPLY:
...
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
...
return err;
}
talkWithDriver()
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver()
c++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
binder_write_read bwr;
// 1 mIn.dataPosition() == mIn.dataSize() == 0, so needRead = true
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// 2 !doReceive == false but needRead == true, so outAvail = mOut.dataSize()
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
bwr.write_size = outAvail;
// *** the outgoing data
bwr.write_buffer = (uintptr_t)mOut.data();
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
bwr.read_size = 0;
bwr.read_buffer = 0;
}
...
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
bwr.write_consumed = 0;
bwr.read_consumed = 0;
status_t err;
// 3
do {
...
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
...
} while (err == -EINTR);
...
binder_write_read: this struct records both the size of the data in each direction and how much of it has been consumed; it carries the data to be written to the binder driver as well as the buffer for data to be read back from it, with the write side and the read side kept separate.
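For reference, the binder_write_read definition from the binder UAPI header (paraphrased; field widths follow the 64-bit uapi types):
C
struct binder_write_read {
    binder_size_t    write_size;      /* bytes available in write_buffer */
    binder_size_t    write_consumed;  /* bytes of it the driver has consumed */
    binder_uintptr_t write_buffer;    /* user-space address of the outgoing commands (mOut.data()) */
    binder_size_t    read_size;       /* bytes available in read_buffer */
    binder_size_t    read_consumed;   /* bytes of returned data the driver filled in */
    binder_uintptr_t read_buffer;     /* user-space address for returned commands (mIn.data()) */
};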
The function therefore first packs everything into a binder_write_read bwr and then hands bwr to the Binder driver via ioctl(). In detail:
The parameter doReceive defaults to true.
- mIn has not been written to yet and all of its fields still hold their initial values: mIn.dataPosition() returns mDataPos = 0 and mIn.dataSize() returns mDataSize = 0, so needRead = true.
- With doReceive = true and needRead = true, outAvail = mOut.dataSize(), which is non-zero. bwr is then initialized; afterwards its members are:
c++
bwr.write_size = outAvail; // size of the data in mOut, greater than 0
bwr.write_buffer = (long unsigned int)mOut.data(); // mOut.mData, base address of the outgoing data
bwr.write_consumed = 0;
bwr.read_size = mIn.dataCapacity(); // 256
bwr.read_buffer = (long unsigned int)mIn.data(); // mIn.mData, currently empty
bwr.read_consumed = 0;
- Once bwr is initialized, ioctl(fd, BINDER_WRITE_READ, &bwr) is called to interact with the Binder driver. The packed data nests as follows:
- the ioctl() call carries the BINDER_WRITE_READ command plus the binder_write_read struct;
- binder_write_read's write_buffer contains the packet to be transferred;
- the data of that packet in turn contains the flat_binder_object, the add_service command code, and the other fields written earlier;
- and the flat_binder_object wraps the MediaPlayerService object being transferred.
2.3 Entering the driver (kernel)
2.3.1 binder_ioctl
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl
c
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
int ret;
struct binder_proc *proc = filp->private_data;
struct binder_thread *thread;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
...
binder_lock(__func__);
//look up the binder_thread for the calling thread in proc; if none is found, create one and add it to proc->threads
thread = binder_get_thread(proc);
...
switch (cmd) {
// the BINDER_WRITE_READ command
case BINDER_WRITE_READ:
ret = binder_ioctl_write_read(filp, cmd, arg, thread);
if (ret)
goto err;
break;
...
}
The BINDER_WRITE_READ command dispatches to binder_ioctl_write_read().
2.3.2 binder_ioctl_write_read
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read
C
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
int ret = 0;
struct binder_proc *proc = filp->private_data;
unsigned int size = _IOC_SIZE(cmd);
void __user *ubuf = (void __user *)arg;
struct binder_write_read bwr;
...
if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
if (bwr.write_size > 0) {
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
...
}
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
...
}
if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
ret = -EFAULT;
goto out;
}
out:
return ret;
}
As noted above, both write_size and read_size are greater than 0, so both the write pass and the read pass are executed.
2.3.3 binder_thread_write
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write
C
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
//walk through the contents of binder_write_read.write_buffer
while (ptr < end && thread->return_error == BR_OK) {
// read the command word from user space into the kernel and store it in cmd
// BC_TRANSACTION
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
...
switch (cmd) {
...
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
// copy the binder_transaction_data assembled in user space into the kernel
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
...
}
//update bwr.write_consumed
*consumed = ptr - buffer;
}
return 0;
}
- The command code read from user space is BC_TRANSACTION, so the BC_TRANSACTION case is taken.
- copy_from_user() copies the binder_transaction_data from user space into kernel space, and binder_transaction() is then called to process it.
2.3.4 binder_transaction
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write -> binder_transaction
> Part 1
Obtain the peer's binder_node, binder_proc, pending-work queue, and so on.
C
static void binder_transaction(struct binder_proc *proc,
struct binder_thread *thread, // the initiator's binder thread
struct binder_transaction_data *tr, int reply)
{
struct binder_transaction *t;
struct binder_work *tcomplete;
binder_size_t *offp, *off_end;
binder_size_t off_min;
struct binder_proc *target_proc;
struct binder_thread *target_thread = NULL;
struct binder_node *target_node = NULL;
struct list_head *target_list;
wait_queue_head_t *target_wait;
struct binder_transaction *in_reply_to = NULL;
struct binder_transaction_log_entry *e;
uint32_t return_error;
...
//not the reply path
if (reply) {
...
} else {
// 1
if (tr->target.handle) { // handle is 0 here
...
} else {
// 1
//the target of this transaction is ServiceManager's binder node,
//i.e. the transaction is to be handled by ServiceManager
target_node = binder_context_mgr_node;
...
}
e->to_node = target_node->debug_id;
// 2 obtain the target process's binder_proc
target_proc = target_node->proc;
...
// 3 try to find a target thread
if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
struct binder_transaction *tmp;
tmp = thread->transaction_stack;
...
// walk the initiator's transaction stack looking for a binder_thread belonging to the target process
while (tmp) {
if (tmp->from && tmp->from->proc == target_proc)
target_thread = tmp->from;
tmp = tmp->from_parent;
}
}
}
if (target_thread) {
...
} else {
// 4 these point at the target's pending-work queue;
// new work is later appended to it with list_add_tail
target_list = &target_proc->todo;
target_wait = &target_proc->wait;
}
...
- Obtain the target binder node: tr->target.handle == 0 denotes ServiceManager, so the else branch assigns target_node = binder_context_mgr_node (introduced in the earlier article where SM registers itself as the context manager: it is SM's binder node).
- From SM's binder node, obtain its binder_proc.
- Try to find a target binder_thread.
- Obtain the todo and wait queues: from the target binder_thread if one was found at step 3, otherwise from the target binder_proc.
The result:
C
target_node = binder_context_mgr_node; // the target node: SM's binder node
target_proc = target_node->proc; // the target process: SM's binder_proc (its per-process binder context)
target_list = &target_proc->todo; // the pending-work queue
target_wait = &target_proc->wait; // the wait queue
> Part 2 (1)
Create two pieces of pending work, one for the peer and one for ourselves, and fill the transaction with the relevant data and commands.
C
//allocate the pending transaction t (a binder_transaction object)
t = kzalloc(sizeof(*t), GFP_KERNEL);
...
//allocate the pending work item tcomplete (a binder_work object)
tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
...
// 3 ---------
//set from: this transaction was initiated by MediaPlayerService
if (!reply && !(tr->flags & TF_ONE_WAY))
t->from = thread;
else
t->from = NULL;
//the following assignments initialize the transaction t
t->sender_euid = proc->tsk->cred->euid;
//the transaction will be handled by the target_proc process
t->to_proc = target_proc;
//and, if one was found, by the target_thread thread
t->to_thread = target_thread;
//transaction code
t->code = tr->code;
//transaction flags
t->flags = tr->flags;
//transaction priority
t->priority = task_nice(current);
...
//allocate a kernel buffer from the target process's binder mapping
t->buffer = binder_alloc_buf(target_proc, tr->data_size,
tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
...
t->buffer->allow_user_free = 0;
t->buffer->debug_id = t->debug_id;
//record the owning transaction
t->buffer->transaction = t;
// record the transaction's target (the binder node that will handle it)
t->buffer->target_node = target_node;
if (target_node)
// t now references target_node, so take a strong reference on it
binder_inc_node(target_node, 1, 0, NULL);
// start of the offsets array inside the buffer
offp = (binder_size_t *)(t->buffer->data +
ALIGN(tr->data_size, sizeof(void *)));
// 4 ---------
// copy tr's data buffer from user space into t
if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
tr->data.ptr.buffer, tr->data_size)) {
...
}
// after the copy, offp is the kernel-space start address of the offsets array describing the flat_binder_objects
if (copy_from_user(offp, (const void __user *)(uintptr_t)
tr->data.ptr.offsets, tr->offsets_size)) {
...
}
...
// off_end is the end address of that offsets array in kernel space
off_end = (void *)offp + tr->offsets_size;
- Create binder_transaction t: it will later be marked type = BINDER_WORK_TRANSACTION and queued as pending work on the target process's todo list (in this article the target is the client's peer, SM).
- Create binder_work tcomplete: it will later be marked type = BINDER_WORK_TRANSACTION_COMPLETE and queued as pending work on the initiating process's own todo list (the client itself).
- Fill in the pending transaction t:
- give t its initiator thread, target proc, target thread, and so on;
- allocate the kernel buffer t->buffer and record the target binder node in it.
- Copy the data to be transferred from user space into the kernel.
> Part 2 (2)
Process the flat_binder_object being transferred, i.e. the wrapped-up MediaPlayerService object we attached when initiating addService.
C
for (; offp < off_end; offp++) {
struct flat_binder_object *fp;
...
fp = (struct flat_binder_object *)(t->buffer->data + *offp);
off_min = *offp + sizeof(struct flat_binder_object);
switch (fp->type) {
case BINDER_TYPE_BINDER:
case BINDER_TYPE_WEAK_BINDER: {
struct binder_ref *ref;
// 1 look up the binder node for the Media service object being transferred
struct binder_node *node = binder_get_node(proc, fp->binder);
if (node == NULL) {
// 1 not found: a new binder node has to be created for Media
node = binder_new_node(proc, fp->binder, fp->cookie);
...
node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
}
...
// 2 get the binder reference for this node in the target process: reuse it if it already exists, otherwise create it
ref = binder_get_ref_for_node(target_proc, node);
...
// 3 convert the type to BINDER_TYPE_HANDLE, i.e. into a binder reference
if (fp->type == BINDER_TYPE_BINDER)
fp->type = BINDER_TYPE_HANDLE;
else
fp->type = BINDER_TYPE_WEAK_HANDLE;
// store the reference's descriptor in handle
fp->handle = ref->desc;
binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
&thread->todo);
...
} break;
1> Obtain the binder node of the object being transferred by calling binder_get_node(); if it is not found, create it with binder_new_node().
c
static struct binder_node *binder_get_node(struct binder_proc *proc,
binder_uintptr_t ptr)
{
struct rb_node *n = proc->nodes.rb_node;
struct binder_node *node;
while (n) {
node = rb_entry(n, struct binder_node, rb_node);
if (ptr < node->ptr)
n = n->rb_left;
else if (ptr > node->ptr)
n = n->rb_right;
else
return node;
}
return NULL;
}
c
static struct binder_node *binder_new_node(struct binder_proc *proc,
binder_uintptr_t ptr,
binder_uintptr_t cookie)
{
struct rb_node **p = &proc->nodes.rb_node;
struct rb_node *parent = NULL;
struct binder_node *node;
while (*p) {
parent = *p;
node = rb_entry(parent, struct binder_node, rb_node);
if (ptr < node->ptr)
p = &(*p)->rb_left;
else if (ptr > node->ptr)
p = &(*p)->rb_right;
else
return NULL;
}
// not found: allocate and initialize a new binder_node
node = kzalloc(sizeof(*node), GFP_KERNEL);
if (node == NULL)
return NULL;
binder_stats_created(BINDER_STAT_NODE);
rb_link_node(&node->rb_node, parent, p);
rb_insert_color(&node->rb_node, &proc->nodes);
node->debug_id = ++binder_last_id;
node->proc = proc;
node->ptr = ptr;
node->cookie = cookie;
node->work.type = BINDER_WORK_NODE;
INIT_LIST_HEAD(&node->work.entry);
INIT_LIST_HEAD(&node->async_todo);
...
return node;
}
2> BINDER_TYPE_BINDER means this is a Service component that requires a binder node, and the target process needs a binder reference object to refer to that node. binder_get_ref_for_node() is called to look up or create the reference. binder_get_ref_for_node:
C
static struct binder_ref *binder_get_ref_for_node(struct binder_proc *proc,
struct binder_node *node)
{
struct rb_node *n;
struct rb_node **p = &proc->refs_by_node.rb_node;
struct rb_node *parent = NULL;
struct binder_ref *ref, *new_ref;
struct binder_context *context = proc->context;
// 1 search the process's red-black tree of binder references for the one we want and return it if found
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_node);
if (node < ref->node)
p = &(*p)->rb_left;
else if (node > ref->node)
p = &(*p)->rb_right;
else
return ref;
}
// 2 not found: create one
new_ref = kzalloc(sizeof(*ref), GFP_KERNEL);
...
// initialize it
binder_stats_created(BINDER_STAT_REF);
new_ref->debug_id = ++binder_last_id;
new_ref->proc = proc; // the target process, i.e. the process that will hold the reference (SM in this example)
new_ref->node = node; // the binder node being referenced (Media in this example)
// 3 insert into the target process's refs_by_node tree
rb_link_node(&new_ref->rb_node_node, parent, p);
rb_insert_color(&new_ref->rb_node_node, &proc->refs_by_node);
// 4 compute the new reference's descriptor and store it in new_ref->desc
new_ref->desc = (node == context->binder_context_mgr_node) ? 0 : 1;
for (n = rb_first(&proc->refs_by_desc); n != NULL; n = rb_next(n)) {
ref = rb_entry(n, struct binder_ref, rb_node_desc);
if (ref->desc > new_ref->desc)
break;
new_ref->desc = ref->desc + 1;
}
p = &proc->refs_by_desc.rb_node;
// make sure the newly assigned descriptor is not already in use
while (*p) {
parent = *p;
ref = rb_entry(parent, struct binder_ref, rb_node_desc);
if (new_ref->desc < ref->desc)
p = &(*p)->rb_left;
else if (new_ref->desc > ref->desc)
p = &(*p)->rb_right;
else
BUG();
}
// 5 insert into the target process's refs_by_desc tree
rb_link_node(&new_ref->rb_node_desc, parent, p);
rb_insert_color(&new_ref->rb_node_desc, &proc->refs_by_desc);
if (node) {
// 6 add the binder reference new_ref to the list of references kept by the binder node it refers to
hlist_add_head(&new_ref->node_entry, &node->refs);
...
} else {
...
}
return new_ref;
}
- Try to find an existing binder reference: first look in the target proc to see whether the target process (SM) already holds a reference to this binder node (Media). This article assumes a first request, so nothing is found and a reference has to be created and initialized.
- Create and fill in the binder reference: it records the target process, the Service's binder node, and so on.
- Store the reference: insert it into the target process's (SM's) refs_by_node red-black tree.
- Compute the reference's descriptor.
- Insert the reference, keyed by that descriptor, into the target process's (SM's) refs_by_desc red-black tree.
- Add the reference to the list the referenced binder node (Media) keeps of all its references.
3> Convert the type from BINDER_TYPE_BINDER to BINDER_TYPE_HANDLE.
4> Increment the reference count so the binder reference object is not destroyed while in use.
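For reference, a sketch of binder_inc_ref() as it appears in binder drivers of this vintage (paraphrased; newer kernels restructure the reference counting):
C
static int binder_inc_ref(struct binder_ref *ref, int strong,
                          struct list_head *target_list)
{
    int ret;
    if (strong) {
        // the first strong reference also takes a strong count on the node
        if (ref->strong == 0) {
            ret = binder_inc_node(ref->node, 1, 1, target_list);
            if (ret)
                return ret;
        }
        ref->strong++;
    } else {
        // the first weak reference takes a weak count on the node
        if (ref->weak == 0) {
            ret = binder_inc_node(ref->node, 0, 1, target_list);
            if (ret)
                return ret;
        }
        ref->weak++;
    }
    return 0;
}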
> Part 3
Queue the two pieces of work onto the peer's and our own pending lists, then wake the peer up to process its share.
C
if (reply) {
...
} else if (!(t->flags & TF_ONE_WAY)) {
t->need_reply = 1; // a reply is expected (not oneway)
// 1. push transaction t onto the source thread's transaction stack
t->from_parent = thread->transaction_stack;
thread->transaction_stack = t;
} else {
// oneway (asynchronous) handling
...
}
// 2. mark transaction t's work type as BINDER_WORK_TRANSACTION
t->work.type = BINDER_WORK_TRANSACTION;
// add it to the target process's/thread's pending list
list_add_tail(&t->work.entry, target_list);
// 3. mark tcomplete as BINDER_WORK_TRANSACTION_COMPLETE
tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
// add it to the source binder thread's todo queue
list_add_tail(&tcomplete->entry, &thread->todo);
// 4. wake up the target process's wait queue
if (target_wait)
wake_up_interruptible(target_wait);
return;
- The pending transaction t is pushed onto the initiator's binder_thread transaction stack.
- t is marked BINDER_WORK_TRANSACTION and added to the peer process's/thread's pending list.
- tcomplete is marked BINDER_WORK_TRANSACTION_COMPLETE and added to the source binder thread's todo queue.
- wake_up_interruptible(target_wait) wakes the target process, SM, out of its wait.
binder_thread_write, remaining part
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write
C
static int binder_thread_write(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed)
{
uint32_t cmd;
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
//walk through the contents of binder_write_read.write_buffer
//the command word is read 32 bits (4 bytes) at a time
while (ptr < end && thread->return_error == BR_OK) {
// read 32 bits from user space into the kernel and store them in cmd
if (get_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
...
switch (cmd) {
...
case BC_TRANSACTION:
case BC_REPLY: {
struct binder_transaction_data tr;
if (copy_from_user(&tr, ptr, sizeof(tr)))
return -EFAULT;
ptr += sizeof(tr);
binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
break;
}
...
}
//update bwr.write_consumed
*consumed = ptr - buffer;
}
return 0;
}
Back out of binder_transaction() and on the requesting side again: binder_thread_write() has now finished running binder_transaction(), *consumed (i.e. bwr.write_consumed) is updated, and binder_thread_write() is done.
binder_ioctl_write_read, remaining part
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read
C
static int binder_ioctl_write_read(struct file *filp,
unsigned int cmd, unsigned long arg,
struct binder_thread *thread)
{
...
if (bwr.write_size > 0) {
ret = binder_thread_write(proc, thread,
bwr.write_buffer,
bwr.write_size,
&bwr.write_consumed);
trace_binder_write_done(ret);
if (ret < 0) {
bwr.read_consumed = 0;
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
// >>> execution has now reached this point
if (bwr.read_size > 0) {
ret = binder_thread_read(proc, thread, bwr.read_buffer,
bwr.read_size,
&bwr.read_consumed,
filp->f_flags & O_NONBLOCK);
trace_binder_read_done(ret);
if (!list_empty(&proc->todo))
wake_up_interruptible(&proc->wait);
if (ret < 0) {
if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
ret = -EFAULT;
goto out;
}
}
...
2.3.5 binder_thread_read:
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_read
C
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
//if *consumed == 0, write BR_NOOP into the user-supplied bwr.read_buffer,
//marking the start of the returned data (a Binder driver convention)
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
}
...
while (1) {
uint32_t cmd;
struct binder_transaction_data tr;
struct binder_work *w;
struct binder_transaction *t = NULL;
//if the current thread's todo list is not empty, take the pending work from it
if (!list_empty(&thread->todo)) // first iteration: this branch is taken
// list_first_entry is really a container_of lookup
// binder_thread_write earlier did list_add_tail(&tcomplete->entry, &thread->todo),
// so what we pick up here is tcomplete's BINDER_WORK_TRANSACTION_COMPLETE work item
w = list_first_entry(&thread->todo, struct binder_work, entry);
else if (!list_empty(&proc->todo) && wait_for_proc_work)
...
else { // second iteration: no pending work is left, so we fall into this else and break out
...
break;
}
...
// BINDER_WORK_TRANSACTION_COMPLETE
switch (w->type) {
...
case BINDER_WORK_TRANSACTION_COMPLETE: {
cmd = BR_TRANSACTION_COMPLETE;
// write BR_TRANSACTION_COMPLETE into the user-space buffer
if (put_user(cmd, (uint32_t __user *)ptr))
return -EFAULT;
ptr += sizeof(uint32_t);
binder_stat_br(proc, thread, cmd);
...
// this work item has been handled, so remove it from the todo queue and free it
list_del(&w->entry);
kfree(w);
binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
} break;
...
}
// the BINDER_WORK_TRANSACTION_COMPLETE case never sets t, so we continue here
if (!t)
continue;
...
}
...
// update bwr.read_consumed
*consumed = ptr - buffer;
...
return 0;
}
Parameters:
- proc: the current process.
- thread: the current process's binder thread handling this call.
- binder_buffer: bwr.read_buffer, the buffer for returned data.
- size: bwr.read_size, the buffer size, 256 bytes here.
- consumed: bwr.read_consumed, currently 0, meaning MediaPlayerService has not yet read any of the returned data.
- non_block: 0.
With the inline comments above, the overall flow is:
- write the BR_NOOP command into the user-space buffer;
- pick up the BINDER_WORK_TRANSACTION_COMPLETE work item that binder_thread_write added to this thread's own todo list earlier;
- handle BINDER_WORK_TRANSACTION_COMPLETE by writing a BR_TRANSACTION_COMPLETE command back to user space;
- update bwr.read_consumed.
At this point binder_ioctl() has finished and the driver code is done; we are about to return to user space and continue the analysis there. The bwr fields are now:
C
bwr.write_size = outAvail;
bwr.write_buffer = (long unsigned int)mOut.data();
bwr.write_consumed = outAvail; // equal to write_size
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (long unsigned int)mIn.data(); // now holds the two returned commands:
// BR_NOOP and BR_TRANSACTION_COMPLETE
bwr.read_consumed = 8; // two 32-bit commands = 8 bytes (the amount filled in, not read_size)
- write_consumed == write_size now, meaning "the Binder driver has consumed the entire request";
- read_consumed > 0, meaning "the Binder driver has feedback for MediaPlayerService".
2.4 Back in talkWithDriver (user space)
What happens after ioctl() returns:
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() <- binder_ioctl
C
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
...
status_t err;
do {
...
// returns 0
if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
err = NO_ERROR;
else
err = -errno;
...
} while (err == -EINTR);
if (err >= NO_ERROR) {
//clear out the data that has been written
if (bwr.write_consumed > 0) {
if (bwr.write_consumed < mOut.dataSize())
mOut.remove(0, bwr.write_consumed);
else
mOut.setDataSize(0);
}
//record the data that was read back
if (bwr.read_consumed > 0) {
mIn.setDataSize(bwr.read_consumed);
mIn.setDataPosition(0);
}
...
return NO_ERROR;
}
return err;
}
ioctl() returns 0 from the driver, so err = NO_ERROR and the while loop exits. Execution then continues:
- bwr.write_consumed > 0 and bwr.write_consumed == mOut.dataSize(), so
- mOut.setDataSize(0) is called to reset mOut,
- leaving mOut's mDataSize and mObjectsSize back at 0.
- bwr.read_consumed > 0, so
- mIn.setDataSize(bwr.read_consumed) marks how much valid reply data the driver wrote into mIn,
- and mIn.setDataPosition(0) rewinds the read position to the start.
We then leave talkWithDriver() and return to waitForResponse():
2.4.1 waitForResponse
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() <- IPCThreadState::talkWithDriver()
C
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
uint32_t cmd;
int32_t err;
while (1) {
if ((err=talkWithDriver()) < NO_ERROR) break;
...
if (mIn.dataAvail() == 0) continue;
cmd = (uint32_t)mIn.readInt32();
...
switch (cmd) {
case BR_TRANSACTION_COMPLETE:
...
case BR_DEAD_REPLY:
...
case BR_FAILED_REPLY:
...
case BR_ACQUIRE_RESULT:
...
case BR_REPLY:
...
default:
err = executeCommand(cmd);
if (err != NO_ERROR) goto finish;
break;
}
}
finish:
...
return err;
}
After talkWithDriver() returns successfully, the code goes on to read mIn, which now carries the two commands returned by the Binder driver: BR_NOOP and BR_TRANSACTION_COMPLETE.
- The first command read is BR_NOOP, handled by executeCommand(BR_NOOP); BR_NOOP does nothing, so its analysis is skipped.
- The loop then runs talkWithDriver() again and handles the BR_TRANSACTION_COMPLETE command.
How the two commands get consumed: waitForResponse() calls talkWithDriver() in its while(1) loop; when talkWithDriver() sees there are still unconsumed commands in mIn, it returns immediately, and waitForResponse() continues its loop, reading and handling the next command. The relevant code:
C++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
if (mProcess->mDriverFD <= 0) {
return -EBADF;
}
binder_write_read bwr;
// commands remain unread in mIn, so dataPosition cannot have reached dataSize yet: needRead = false
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// needRead = false, so outAvail = 0
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
// 0
bwr.write_size = outAvail;
bwr.write_buffer = (uintptr_t)mOut.data();
// needRead = false
if (doReceive && needRead) {
bwr.read_size = mIn.dataCapacity();
bwr.read_buffer = (uintptr_t)mIn.data();
} else {
// 0
bwr.read_size = 0;
bwr.read_buffer = 0;
}
...
// return here, back to waitForResponse() to consume the next command, instead of going on to the ioctl() call into the driver
if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
...
At this point the transmission of the request is complete, and waitForResponse() (which loops around talkWithDriver()) enters talkWithDriver() once more. mOut has been fully processed and its data size cleared (at the start of 2.4), and mIn is empty too, so needRead = (mIn.dataPosition() >= mIn.dataSize()) = true and bwr.read_size = mIn.dataCapacity(). When ioctl() is entered again, only binder_thread_read() runs.
2.4.2 Entering the driver again
Going straight to binder_thread_read:
BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::waitForResponse() -> IPCThreadState::talkWithDriver() -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_read
C
static int binder_thread_read(struct binder_proc *proc,
struct binder_thread *thread,
binder_uintptr_t binder_buffer, size_t size,
binder_size_t *consumed, int non_block)
{
void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
void __user *ptr = buffer + *consumed;
void __user *end = buffer + size;
int ret = 0;
int wait_for_proc_work;
// 1 if *consumed == 0, write BR_NOOP into the user-supplied bwr.read_buffer
if (*consumed == 0) {
if (put_user(BR_NOOP, (uint32_t __user *)ptr))
return -EFAULT;
// advance the pointer
ptr += sizeof(uint32_t);
}
retry:
// 2 thread->transaction_stack is non-NULL (it holds the transaction the peer still has to handle), so this is false
wait_for_proc_work = thread->transaction_stack == NULL &&
list_empty(&thread->todo);
...
thread->looper |= BINDER_LOOPER_STATE_WAITING;
if (wait_for_proc_work)
proc->ready_threads++;
binder_unlock(__func__);
...
if (wait_for_proc_work) { // false
...
} else {
if (non_block) {
if (!binder_has_thread_work(thread))
ret = -EAGAIN;
} else
// 3 go to sleep and wait
ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
}
- bwr.read_consumed = 0, so *consumed = 0: BR_NOOP is again written into bwr.read_buffer first.
- thread->transaction_stack still holds the transaction waiting on the peer, so wait_for_proc_work == false.
- wait_event_freezable() is called and the current thread goes into an interruptible wait, sleeping until it is woken up (SM will wake it once it has finished handling MediaPlayerService's request).
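For reference, the wake-up condition binder_has_thread_work() roughly looks like this in binder drivers of this vintage (paraphrased):
C
static int binder_has_thread_work(struct binder_thread *thread)
{
    /* the thread becomes runnable again once it has work queued, an error
     * to report, or has been asked to return to user space */
    return !list_empty(&thread->todo) || thread->return_error != BR_OK ||
           (thread->looper & BINDER_LOOPER_STATE_NEED_RETURN);
}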