Binder Mechanism - Obtaining a Service via getService

Overview

A service is obtained by calling the getService interface provided by ServiceManager. The flow is broadly the same as registering a service via addService; the difference is that during registration the client carries a Binder object to the server, whereas when obtaining a service the server feeds data back to the client.

This article therefore focuses on the differences, and on what link is established that allows the two processes to communicate.

It should be read alongside the service-registration article; the shared flow is abbreviated, and the differences are covered in detail.

Walkthrough

The client initiates getService

As before, this article uses the native-layer calling path as its entry point.

C++
IMediaDeathNotifier::getMediaPlayerService()
{
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            ALOGW("Media player service not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);

        ...
    }
    return sMediaPlayerService;
}

defaultServiceManager() obtains the ServiceManager proxy, then getService is called to request the service named "media.player".

getService

C++
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    // if the lookup fails, retry; at most 5 attempts
    for (n = 0; n < 5; n++){
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        sleep(1);
    }
    return NULL;
}

checkService

C++
virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    // an int32 followed by a string (the string is "android.os.IServiceManager")
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    // the service name, i.e. "media.player"
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
  1. Package the data
  2. Actually issue the request

1 Packaging the data

1.1 Parcel::writeInterfaceToken

In data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor()), getInterfaceDescriptor() returns "android.os.IServiceManager",

i.e. the call is data.writeInterfaceToken("android.os.IServiceManager").

C++
status_t Parcel::writeInterfaceToken(const String16& interface)
{
    // an int32
    writeInt32(IPCThreadState::self()->getStrictModePolicy() |
               STRICT_MODE_PENALTY_GATHER);
    // the string ("android.os.IServiceManager")
    return writeString16(interface);
}
  • IPCThreadState::getStrictModePolicy() returns mStrictModePolicy, whose initial value is 0, so the writeInt32 call simplifies to writeInt32(STRICT_MODE_PENALTY_GATHER).
  • writeString16(interface) is writeString16("android.os.IServiceManager").

The purpose of these two values: when ServiceManager receives the data, it checks the data header to judge whether the data is valid. These two values are that header.
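As a rough illustration (plain C++ with illustrative names, not the real Parcel API), the header that checkService writes and that ServiceManager later validates can be modeled like this:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical stand-in for the real strict-mode constant.
constexpr int32_t STRICT_MODE_PENALTY_GATHER = 0x40;

// Minimal flat byte-buffer model of a Parcel.
struct MiniParcel {
    std::vector<uint8_t> data;
    size_t pos = 0;

    void writeInt32(int32_t v) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&v);
        data.insert(data.end(), p, p + sizeof(v));
    }
    void writeString16(const std::u16string& s) {
        writeInt32(static_cast<int32_t>(s.size()));
        const uint8_t* p = reinterpret_cast<const uint8_t*>(s.data());
        data.insert(data.end(), p, p + s.size() * sizeof(char16_t));
    }
    int32_t readInt32() {
        int32_t v;
        std::memcpy(&v, data.data() + pos, sizeof(v));
        pos += sizeof(v);
        return v;
    }
    std::u16string readString16() {
        int32_t len = readInt32();
        std::u16string s(reinterpret_cast<const char16_t*>(data.data() + pos), len);
        pos += len * sizeof(char16_t);
        return s;
    }
};

// Client side: the two header values written by writeInterfaceToken().
void writeInterfaceToken(MiniParcel& p, const std::u16string& iface) {
    p.writeInt32(STRICT_MODE_PENALTY_GATHER);  // strict-mode policy word
    p.writeString16(iface);                    // interface descriptor
}

// Server side: the header is checked before trusting the payload.
bool checkInterfaceToken(MiniParcel& p, const std::u16string& expected) {
    p.readInt32();                       // strict-mode policy, ignored here
    return p.readString16() == expected; // descriptor must match
}
```

After the header, the payload (here the service name) follows in the same flat buffer.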

1.2 data.writeString16(name)

data.writeString16(name) writes the name of the MediaPlayerService service into data; the argument is name = "media.player".

2 Starting the transfer

2.1 BpBinder::transact()

C++
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // code: CHECK_SERVICE_TRANSACTION
    // mAlive is initialized to 1 in the BpBinder constructor
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

Since mAlive's initial value is 1, IPCThreadState::self()->transact() is called.

2.2 IPCThreadState::transact():

BpBinder::transact() -> IPCThreadState::transact()

C++
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    ...
    
    if (err == NO_ERROR) {
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        ...
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        ...
    } else {
        ...
    }
    
    return err;
}
  • Parameters:

    handle: the request's target handle, i.e. BpBinder's mHandle. This is a call through the ServiceManager proxy, so mHandle is ServiceManager's handle, whose value is 0; it was assigned when the BpBinder was created, as covered in the earlier article on obtaining ServiceManager.

    code: CHECK_SERVICE_TRANSACTION

    data: the Parcel populated in checkService

    reply: the Parcel that receives the data fed back by the Binder driver

    flags: the default value 0.

  • writeTransactionData() packages the data.

  • This call is not one-way. Once the data is packaged, waitForResponse() sends it to the Binder driver and waits for the driver's feedback.

2.2.1 writeTransactionData()

BpBinder::transact() -> IPCThreadState::transact() -> IPCThreadState::writeTransactionData()

This reads the data previously packed into the Parcel, repacks it into a binder_transaction_data structure tr that the Binder driver understands, and writes the BC_TRANSACTION command together with tr (whose code field is CHECK_SERVICE_TRANSACTION) into the Parcel mOut.

C++
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    // cmd: BC_TRANSACTION code: CHECK_SERVICE_TRANSACTION
    binder_transaction_data tr;

    tr.target.ptr = 0; 
    tr.target.handle = handle;
    tr.code = code; // code: CHECK_SERVICE_TRANSACTION
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
		...
    } else {
        ...
    }
    
    mOut.writeInt32(cmd); // cmd: BC_TRANSACTION
    mOut.write(&tr, sizeof(tr));
    
    return NO_ERROR;
}
  • ipcDataSize(): returns mDataSize, the length of the content in mData; it also marks the end of the data, i.e. where subsequent writes begin
  • ipcData(): returns mData, the start address of the data buffer
  • ipcObjectsCount(): returns mObjectsSize, the number of flattened objects; writeTransactionData() multiplies it by sizeof(binder_size_t) to get the byte size of the offsets array
  • ipcObjects(): returns mObjects, the array holding the offsets of the written objects

2.2.2 waitForResponse

C++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        ...

-> talkWithDriver

C++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...
    
    binder_write_read bwr;
    
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    ...
    
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ...
#if defined(HAVE_ANDROID_OS)
        // write to the driver
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    ...

The Parcel data is copied into a binder_write_read structure bwr, which is the structure the driver understands.

C++
bwr.write_size = outAvail;                          // size of the data in mOut, i.e. the amount to transfer; greater than 0
bwr.write_buffer = (long unsigned int)mOut.data();  // start address of the data to transfer in mOut
bwr.write_consumed = 0;
bwr.read_size = mIn.dataCapacity();                 // 256
bwr.read_buffer = (long unsigned int)mIn.data();    // mIn.mData, currently empty
bwr.read_consumed = 0;

Into the driver (kernel): ioctl - BINDER_WRITE_READ

binder_ioctl

C
static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    // the client-side binder_proc
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        goto err_unlocked;

    binder_lock(__func__);
    // the client-side binder_thread
    thread = binder_get_thread(proc);
    ...

    switch (cmd) {
        case BINDER_WRITE_READ:
            ret = binder_ioctl_write_read(filp, cmd, arg, thread);
            if (ret)
                goto err;
            break;
    ...
}

binder_ioctl_write_read

C
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    ...
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    ...
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
        ...
    }
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        ...
    }
    ...
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}

binder_thread_write

C
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        ...
        switch (cmd) {
            ...
            case BC_TRANSACTION:
            case BC_REPLY: {
                struct binder_transaction_data tr;

                if (copy_from_user(&tr, ptr, sizeof(tr)))
                    return -EFAULT;
                ptr += sizeof(tr);
                binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
                break;
            }
            ...
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

binder_transaction

C
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;
    ...
    // --- Part 1: obtain the remote side's info (its binder node, binder_proc, etc.) ---
    if (reply) {
        ...
    } else {
        if (tr->target.handle) {
            ...
        } else {
            target_node = binder_context_mgr_node;
            ...
        }
        target_proc = target_node->proc;
        ...
    }
    if (target_thread) {
        ...
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    // --- Part 2: create two work items with their data, one for the remote side and one for ourselves ---
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    ...
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    ...
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

    ...
    // record the initiator of transaction t
    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL;
    // fill transaction t with data
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    ...
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data +
                 ALIGN(tr->data_size, sizeof(void *)));
    // copy the user-space data into the kernel
    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
        ...
    }
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
        ...
    }
    ...
    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    // walk the flattened objects; getService carries none, so there is nothing to process
    for (; offp < off_end; offp++) {
        ...
    }
    if (reply) {
        ...
    } else if (!(t->flags & TF_ONE_WAY)) {
        BUG_ON(t->buffer->async_transaction != 0);
        t->need_reply = 1;
        t->from_parent = thread->transaction_stack;
        // push the current transaction t onto the current thread's transaction stack
        thread->transaction_stack = t;
    } else {
        ...
    }
    // --- Part 3: the transaction is ready; queue it and wake up the remote side ---
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    // wake up the target process
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
    ...
}

1 Obtain the remote side's binder node. The request targets ServiceManager, whose node is the global binder_context_mgr_node; through it we reach the remote binder_proc and binder_thread.

2 Create the work items t and tcomplete. t (BINDER_WORK_TRANSACTION) is the task the remote side must handle and carries the client's request; tcomplete (BINDER_WORK_TRANSACTION_COMPLETE) is handled locally and signals that sending has finished.

3 wake_up_interruptible(target_wait) wakes up the remote side.
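The queue-and-wake handoff in part 3 can be sketched with ordinary C++ primitives, with a std::list and a condition variable standing in for the kernel's todo list and wait queue (names are illustrative):

```cpp
#include <cassert>
#include <condition_variable>
#include <list>
#include <mutex>
#include <string>

// Illustrative model of a binder_proc's todo list and wait queue.
struct TargetProc {
    std::mutex lock;
    std::condition_variable wait; // stands in for wait_queue_head_t
    std::list<std::string> todo;  // stands in for the binder_work list
};

// Sender side: list_add_tail(&t->work.entry, target_list) + wake_up_interruptible().
void postWork(TargetProc& p, std::string work) {
    {
        std::lock_guard<std::mutex> g(p.lock);
        p.todo.push_back(std::move(work));
    }
    p.wait.notify_one(); // wake up the remote side
}

// Receiver side: like binder_thread_read, block until a work item arrives.
std::string takeWork(TargetProc& p) {
    std::unique_lock<std::mutex> g(p.lock);
    p.wait.wait(g, [&] { return !p.todo.empty(); });
    std::string w = p.todo.front();
    p.todo.pop_front();
    return w;
}
```

The real driver additionally distinguishes per-thread and per-process todo lists, which the sketch folds into one queue.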

binder_thread_read

C
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

    ...

    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (wait_for_proc_work)
        proc->ready_threads++;

    binder_unlock(__func__);

    ... // --->>> after wakeup, execution resumes here

    binder_lock(__func__);

    if (wait_for_proc_work)
        proc->ready_threads--;
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    if (ret)
        return ret;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        // fetch a pending work item
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        } else {
            ...
        }

        // break out of the loop when there is no room left in the read buffer
        if (end - ptr < sizeof(tr) + 4)
            break;

        switch (w->type) {
            ...
            // pass the command up to user space
            case BINDER_WORK_TRANSACTION_COMPLETE: {
                cmd = BR_TRANSACTION_COMPLETE;
                if (put_user(cmd, (uint32_t __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(uint32_t);

                binder_stat_br(proc, thread, cmd);
                ...

                list_del(&w->entry);
                kfree(w);
                binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
            } break;
            ...
        }

        // t is not set here: BINDER_WORK_TRANSACTION_COMPLETE does not assign t, so continue
        if (!t)
            continue;
        ...
    }

done:
    // bwr.read_consumed
    *consumed = ptr - buffer;
    ...
    return 0;
}

BR_TRANSACTION_COMPLETE is passed up to user space for handling.

Back in user space (user)

IPCThreadState::talkWithDriver()

C++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ...    
    binder_write_read bwr;
    
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    ...

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        ...
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
        // ---> returns here once the driver is done
        ...
    } while (err == -EINTR);
    ...
    if (err >= NO_ERROR) {
        // clear the written data: the driver has consumed everything that was sent
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        // position mIn according to how much was read
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        ...
        return NO_ERROR;
    }
    return err;
}

waitForResponse

C++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        // after talkWithDriver returns, loop over the returned commands BR_NOOP and BR_TRANSACTION_COMPLETE
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = (uint32_t)mIn.readInt32();
        ...
        // neither BR_NOOP nor BR_TRANSACTION_COMPLETE involves any substantial work; omitted
        switch (cmd) {
            case BR_TRANSACTION_COMPLETE:
                if (!reply && !acquireResult) goto finish;
                break;
            ...
        }
    }
    ...
    
    return err;
}

The driver brings back two commands, BR_NOOP and BR_TRANSACTION_COMPLETE, which the loop pulls out and handles; neither does anything substantial. After handling them, the loop re-enters talkWithDriver, this time with write_size = 0 and read_size = mIn.dataCapacity(), and goes back into the driver to read. The handling of BR_TRANSACTION_COMPLETE matches the addService article.
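The read loop above consumes a flat byte stream of BR_* commands from mIn. A minimal model of that parsing (illustrative command values; the real ones are ioctl-encoded, and real commands may carry payloads) looks like:

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Illustrative BR_* command values.
enum : uint32_t { BR_NOOP = 1, BR_TRANSACTION_COMPLETE = 2 };

// Model of mIn: a flat buffer of consecutive uint32 commands plus a read position.
struct InBuffer {
    std::vector<uint8_t> data;
    size_t pos = 0;

    void putCmd(uint32_t cmd) {
        const uint8_t* p = reinterpret_cast<const uint8_t*>(&cmd);
        data.insert(data.end(), p, p + sizeof(cmd));
    }
    bool dataAvail() const { return pos < data.size(); }
    uint32_t readInt32() {
        uint32_t v;
        std::memcpy(&v, data.data() + pos, sizeof(v));
        pos += sizeof(v);
        return v;
    }
};

// The shape of waitForResponse(): pull commands until the stream is drained.
int drainCommands(InBuffer& in) {
    int handled = 0;
    while (in.dataAvail()) {
        switch (in.readInt32()) {
        case BR_NOOP:                 // nothing to do
        case BR_TRANSACTION_COMPLETE: // send acknowledged; keep waiting for the reply
            ++handled;
            break;
        }
    }
    return handled;
}
```

Once the stream is drained, the real loop calls talkWithDriver again to block in the driver for the next batch.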

Into the driver (kernel): waiting

waitForResponse -> talkWithDriver -> ioctl -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_read

Because write_size = 0 and read_size = mIn.dataCapacity(), binder_thread_write is skipped and execution goes straight into binder_thread_read.

There is no work left to process, so binder_thread_read blocks and waits.

Over to the remote side (kernel)

Woken up

C
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    ... // resumes here when woken

    binder_lock(__func__);

    if (wait_for_proc_work)
        proc->ready_threads--;
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    ...
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        // fetch a pending work item
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            ...
        } else {
            ...
        }

        if (end - ptr < sizeof(tr) + 4)
            break;

        switch (w->type) {
            case BINDER_WORK_TRANSACTION: {
                t = container_of(w, struct binder_transaction, work);
            } break;
            ...
        }

        if (!t)
            continue;

        if (t->buffer->target_node) {
            // when the client created this work item earlier, it set
            // t->buffer->target_node = target_node, i.e. ServiceManager's binder node
            struct binder_node *target_node = t->buffer->target_node;
            // ServiceManager's ptr is NULL
            tr.target.ptr = target_node->ptr;
            // ServiceManager's cookie is NULL
            tr.cookie =  target_node->cookie;
            t->saved_priority = task_nice(current);
            if (t->priority < target_node->min_priority &&
                !(t->flags & TF_ONE_WAY))
                binder_set_nice(t->priority);
            else if (!(t->flags & TF_ONE_WAY) ||
                 t->saved_priority > target_node->min_priority)
                binder_set_nice(target_node->min_priority);
            cmd = BR_TRANSACTION;
        } else {
            ...
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;

            tr.sender_pid = task_tgid_nr_ns(sender,
                            task_active_pid_ns(current));
        } else {
            tr.sender_pid = 0;
        }

        // data size
        tr.data_size = t->buffer->data_size;
        // size in bytes of the offsets array for the objects in the data
        tr.offsets_size = t->buffer->offsets_size;
        // the data itself
        tr.data.ptr.buffer = (binder_uintptr_t)(
                    (uintptr_t)t->buffer->data +
                    proc->user_buffer_offset);
        // the offsets array for the objects in the data
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));
        // write cmd into ptr, i.e. deliver it to user space
        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        // copy the tr data to user space
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);

        ...
        // remove the handled work item
        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        // set up the reply bookkeeping
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            // this transaction goes to the ServiceManager daemon for handling;
            // after handling it, ServiceManager must report the result back to
            // the Binder driver, so record the reply info here
            t->to_parent = thread->transaction_stack;
            // to_thread: when ServiceManager replies, the reply is handed to this thread
            t->to_thread = thread;
            // the transaction stack keeps the current transaction, recording which
            // transaction the reply belongs to
            thread->transaction_stack = t;
        } else {
            ...
        }
        break;
    }

done:
    // update bwr.read_consumed
    *consumed = ptr - buffer;
    ...
    return 0;
}

1 The BINDER_WORK_TRANSACTION work item is handled: its binder_transaction object is fetched, the contents are packed into a binder_transaction_data structure, and the BR_TRANSACTION command plus that structure are copied up to user space.

2 consumed, i.e. bwr.read_consumed, is updated.

When this finishes, control leaves kernel space and enters ServiceManager's user space.

Into ServiceManager's user space (user)

binder_loop

C++
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
        // ---> arrives here once the driver call returns
        ...

        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        ...
    }
}

binder_parse

C++
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
        switch(cmd) {
            ...
            case BR_TRANSACTION: {
                struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
                ...
                if (func) {
                    unsigned rdata[256/4];
                    // the data passed in by the binder driver
                    struct binder_io msg;
                    // the data to be fed back to the client through the driver
                    struct binder_io reply;
                    int res;

                    // initialize reply
                    bio_init(&reply, rdata, sizeof(rdata), 4);
                    // initialize msg from the data passed in by the binder driver
                    bio_init_from_txn(&msg, txn);
                    // svcmgr_handler
                    res = func(bs, txn, &msg, &reply);
                    // feed the reply data back to the binder driver
                    binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
                }
                ptr += sizeof(*txn);
                break;
            }
            ...
        }
    }

    return r;
}

BR_TRANSACTION is handled: the data passed in by the driver is loaded into msg, a reply is created to carry the data to be fed back, and svcmgr_handler is called to process the input and build the reply.

svcmgr_handler

C++
int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;
    ...

    switch(txn->code) {
        case SVC_MGR_GET_SERVICE:
        case SVC_MGR_CHECK_SERVICE:
            // the service name
            s = bio_get_string16(msg, &len);
            ...
            handle = do_find_service(bs, s, len, txn->sender_euid, txn->sender_pid);
            if (!handle)
                break;
            bio_put_ref(reply, handle);
            return 0;
        ...
    }

    bio_put_uint32(reply, 0);
    return 0;
}

The service name is read, then do_find_service looks the service up by name.

do_find_service

C++
uint32_t do_find_service(struct binder_state *bs, const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->handle) {
        return 0;
    }

    if (!si->allow_isolated) {
        uid_t appid = uid % AID_USER;
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }

    if (!svc_can_find(s, len, spid)) {
        return 0;
    }

    return si->handle;
}

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

find_svc walks svclist to locate the svcinfo describing the service by name, and do_find_service returns its handle (the reference descriptor).
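The name-to-handle lookup amounts to a linear scan of a list keyed by a UTF-16 name. A compact C++ sketch (illustrative types standing in for svcinfo/svclist, minus the permission checks):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Illustrative equivalent of svcinfo: name -> handle (reference descriptor).
struct SvcInfo {
    std::u16string name;
    uint32_t handle; // 0 means "not registered"
};

struct SvcList {
    std::vector<SvcInfo> svcs; // servicemanager uses a linked list; a vector works here

    // find_svc: exact name match
    const SvcInfo* find(const std::u16string& name) const {
        for (const SvcInfo& si : svcs)
            if (si.name == name) return &si;
        return nullptr;
    }

    // do_find_service, minus the isolated-uid and SELinux checks: handle or 0
    uint32_t findHandle(const std::u16string& name) const {
        const SvcInfo* si = find(name);
        return (si && si->handle) ? si->handle : 0;
    }
};
```

A returned 0 is what makes checkService come back empty, triggering getService's retry loop on the client.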

bio_put_ref

C++
void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
    struct flat_binder_object *obj;

    if (handle)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->handle = handle;
    obj->cookie = 0;
}

A flat_binder_object structure is filled in, with its type set to BINDER_TYPE_HANDLE and its handle set to the reference descriptor.

binder_send_reply

C++
void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       binder_uintptr_t buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        binder_uintptr_t buffer;
        uint32_t cmd_reply;
        struct binder_transaction_data txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY;
    data.txn.target.ptr = 0;
    data.txn.cookie = 0;
    data.txn.code = 0;
    if (status) {
        ...
    } else {
        data.txn.flags = 0;
        data.txn.data_size = reply->data - reply->data0;
        data.txn.offsets_size = ((char*) reply->offs) - ((char*) reply->offs0);
        data.txn.data.ptr.buffer = (uintptr_t)reply->data0;
        data.txn.data.ptr.offsets = (uintptr_t)reply->offs0;
    }
    binder_write(bs, &data, sizeof(data));
}

The data is packed into the structure data, carrying the two commands BC_FREE_BUFFER and BC_REPLY, and then binder_write sends the request to the driver.

C++
int binder_write(struct binder_state *bs, void *data, size_t len)
{
    struct binder_write_read bwr;
    int res;

    bwr.write_size = len;
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) data;
    bwr.read_size = 0;
    bwr.read_consumed = 0;
    bwr.read_buffer = 0;
    res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);
    if (res < 0) {
        fprintf(stderr,"binder_write: ioctl failed (%s)\n",
                strerror(errno));
    }
    return res;
}

binder_write calls ioctl into the driver; since write_size = len and read_size = 0, only binder_thread_write executes in the driver.
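The reply packet that binder_send_reply hands to binder_write works only because __attribute__((packed)) removes padding, so the driver can parse cmd_free, buffer, cmd_reply, and txn as one contiguous write stream. A small layout check (FakeTxn is a stand-in; the real binder_transaction_data is larger):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Stand-in for binder_transaction_data; only the layout idea matters here.
struct FakeTxn {
    uint64_t target;
    uint64_t cookie;
    uint32_t code;
    uint32_t flags;
};

// Same shape as the reply packet binder_send_reply builds.
struct __attribute__((packed)) ReplyPacket {
    uint32_t cmd_free;  // BC_FREE_BUFFER
    uint64_t buffer;    // buffer to free
    uint32_t cmd_reply; // BC_REPLY
    FakeTxn  txn;       // the reply payload description
};
```

Without packed, the compiler would insert 4 bytes of padding after each uint32_t to align the following 8-byte field, and the driver's cursor-based parsing (get_user of the command, then copy_from_user of the payload) would read garbage.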

Into the driver (kernel)

binder_thread_write

binder_write -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write

Commands: BC_FREE_BUFFER, BC_REPLY

C
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        ...
        switch (cmd) {
            ...
            case BC_FREE_BUFFER: {
                binder_uintptr_t data_ptr;
                struct binder_buffer *buffer;

                if (get_user(data_ptr, (binder_uintptr_t __user *)ptr))
                    return -EFAULT;
                ptr += sizeof(binder_uintptr_t);

                buffer = binder_buffer_lookup(proc, data_ptr);
                ...

                if (buffer->transaction) {
                    buffer->transaction->buffer = NULL;
                    buffer->transaction = NULL;
                }
                if (buffer->async_transaction && buffer->target_node) {
                    BUG_ON(!buffer->target_node->has_async_transaction);
                    if (list_empty(&buffer->target_node->async_todo))
                        buffer->target_node->has_async_transaction = 0;
                    else
                        list_move_tail(buffer->target_node->async_todo.next, &thread->todo);
                }
                trace_binder_transaction_buffer_release(buffer);
                binder_transaction_buffer_release(proc, buffer, NULL);
                binder_free_buf(proc, buffer);
                break;
            }

            case BC_TRANSACTION:
            case BC_REPLY: {
                struct binder_transaction_data tr;
                // copy the user-space data in
                if (copy_from_user(&tr, ptr, sizeof(tr)))
                    return -EFAULT;
                // advance the data pointer
                ptr += sizeof(tr);
                // process the data
                binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
                break;
            }
            ...
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

Looking directly at BC_REPLY: the user-space data is copied in, and binder_transaction is called to process it.

binder_transaction

binder_write -> binder_ioctl -> binder_ioctl_write_read -> binder_thread_write -> binder_transaction

C
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply)
{
    struct binder_transaction *t;
    struct binder_work *tcomplete;
    binder_size_t *offp, *off_end;
    binder_size_t off_min;
    struct binder_proc *target_proc;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct list_head *target_list;
    wait_queue_head_t *target_wait;
    struct binder_transaction *in_reply_to = NULL;
    struct binder_transaction_log_entry *e;
    uint32_t return_error;

    e = binder_transaction_log_add(&binder_transaction_log);
    e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
    e->from_proc = proc->pid;
    e->from_thread = thread->pid;
    e->target_handle = tr->target.handle;
    e->data_size = tr->data_size;
    e->offsets_size = tr->offsets_size;
    // Part 1: obtain the peer's information
    if (reply) {
        // Get the peer from the transaction stack:
        // take the pending transaction off the stack
        in_reply_to = thread->transaction_stack;
        if (in_reply_to == NULL) {
            ...
        }
        // restore the saved priority
        binder_set_nice(in_reply_to->saved_priority);
        if (in_reply_to->to_thread != thread) {
            ...
        }
        thread->transaction_stack = in_reply_to->to_parent;
        // from the transaction, get the peer binder_thread, i.e. the original requester
        target_thread = in_reply_to->from;
        if (target_thread == NULL) {
            ...
        }
        if (target_thread->transaction_stack != in_reply_to) {
            ...
        }
        // get the peer's binder_proc
        target_proc = target_thread->proc;
    } else {
        ...
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait;
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    e->to_proc = target_proc->pid;
    // Part 2: create two work items, t and tcomplete, for the peer and for ourselves respectively.
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    ...
    binder_stats_created(BINDER_STAT_TRANSACTION);

    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    ...
    binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

    t->debug_id = ++binder_last_id;
    e->debug_id = t->debug_id;

    ...

    if (!reply && !(tr->flags & TF_ONE_WAY))
        t->from = thread;
    else
        t->from = NULL; // for a reply, this branch is taken
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    t->priority = task_nice(current);

    t->buffer = binder_alloc_buf(target_proc, tr->data_size,
        tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
    ...
    t->buffer->allow_user_free = 0;
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    if (target_node)
        binder_inc_node(target_node, 1, 0, NULL);

    offp = (binder_size_t *)(t->buffer->data +
                 ALIGN(tr->data_size, sizeof(void *)));

    if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
               tr->data.ptr.buffer, tr->data_size)) {
        ...
    }
    if (copy_from_user(offp, (const void __user *)(uintptr_t)
               tr->data.ptr.offsets, tr->offsets_size)) {
        ...
    }
    ...
    off_end = (void *)offp + tr->offsets_size;
    off_min = 0;
    for (; offp < off_end; offp++) {
        struct flat_binder_object *fp;

        ...
        fp = (struct flat_binder_object *)(t->buffer->data + *offp);
        off_min = *offp + sizeof(struct flat_binder_object);
        switch (fp->type) {
            ...
            case BINDER_TYPE_HANDLE:
            case BINDER_TYPE_WEAK_HANDLE: {
                // Look up the service reference by handle; it was added to SM's proc during addService.
                struct binder_ref *ref = binder_get_ref(proc, fp->handle);
                ...
                if (security_binder_transfer_binder(proc->tsk, target_proc->tsk)) {
                    return_error = BR_FAILED_REPLY;
                    goto err_binder_get_ref_failed;
                }
                // ref->node->proc is the requested service (media), target_proc is the client; they differ
                if (ref->node->proc == target_proc) {
                    ...
                } else {
                    struct binder_ref *new_ref;
                    
                    new_ref = binder_get_ref_for_node(target_proc, ref->node);
                    ...
                    fp->handle = new_ref->desc;
                    binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
                    ...
                }
            } break;

            ...
        }
    }
    if (reply) {
        binder_pop_transaction(target_thread, in_reply_to);
    } else if (!(t->flags & TF_ONE_WAY)) {
        ...
    } else {
        ...
    }
    // Part 3: the transaction is assembled; queue it on the target list and wake the peer.
    t->work.type = BINDER_WORK_TRANSACTION;
    list_add_tail(&t->work.entry, target_list);
    tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    list_add_tail(&tcomplete->entry, &thread->todo);
    if (target_wait)
        wake_up_interruptible(target_wait);
    return;
    ...
}

First, note that this request is initiated by SM: the proc and thread parameters describe SM. The peer is naturally the client process, so all the target_xxxx variables describe the client side.

The key points here are:

  1. Obtaining the peer's information

    • SM is issuing a reply: having received the client's request, it now sends back the result. The transaction the client sent over is saved on SM's transaction_stack; it was stored there in binder_thread_read when SM was woken up by the client. The transaction records the requester's binder_thread, assigned in binder_transaction when the client issued the request: t->from = thread. SM can therefore recover the peer's target_thread and target_proc from the transaction.
  2. Further processing of the data taken from SM's user space

    • Handling of BINDER_TYPE_HANDLE: (1) binder_get_ref uses fp->handle, obtained from SM's user space, to find the binder reference for the service that getService is asking for — this is SM's own reference (and descriptor) to the target service; (2) binder_get_ref_for_node then finds or creates the target proc's reference to that service, along with a new descriptor — this is the target (client) process's reference to the service; (3) the newly generated descriptor is written back into fp->handle.
    • In short: the descriptor obtained from SM's user space is used to locate the server's binder node, and that node is then used to create the target (client) process's own reference to it.
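The handle translation described above can be sketched in user space. This is a minimal simulation under stated assumptions — `Node`, `Proc`, `ref_for_node`, and `translate_handle` are invented names standing in for `binder_node`, `binder_proc`, `binder_get_ref_for_node`, and the BINDER_TYPE_HANDLE branch; none of this is kernel code:

```cpp
#include <map>

// Stand-in for binder_node: one per service object.
struct Node { int id; };

// Stand-in for binder_proc with its per-process reference table.
struct Proc {
    std::map<int, Node*> refs;     // desc (handle) -> node, like binder_ref
    std::map<Node*, int> by_node;  // reverse lookup
    int next_desc = 1;             // desc 0 is reserved for ServiceManager

    // Like binder_get_ref_for_node: find or create this proc's ref to node.
    int ref_for_node(Node* n) {
        auto it = by_node.find(n);
        if (it != by_node.end()) return it->second;
        int desc = next_desc++;
        refs[desc] = n;
        by_node[n] = desc;
        return desc;
    }
};

// What binder_transaction does for BINDER_TYPE_HANDLE: a handle valid in
// `src` (SM) is resolved to its node, then re-mapped into `dst` (client).
int translate_handle(Proc& src, Proc& dst, int handle) {
    Node* n = src.refs.at(handle);  // binder_get_ref
    return dst.ref_for_node(n);     // binder_get_ref_for_node
}
```

The point of the simulation: a handle is only meaningful inside the process that owns it, so the driver must re-map SM's descriptor into a descriptor valid in the client's own reference table.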

Waking up the client

binder_thread_read(kernel)

C
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }

    ... // woken up here

    ...

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;
        // fetch the pending work item
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work,
                         entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work,
                         entry);
        } else {
            ...
        }

        if (end - ptr < sizeof(tr) + 4)
            break;

        switch (w->type) {
            case BINDER_WORK_TRANSACTION: {
                t = container_of(w, struct binder_transaction, work);
            } break;
            ....
        }

        if (!t)
            continue;

        // a reply transaction has no target_node set, so the else branch runs
        if (t->buffer->target_node) {
            ...
        } else {
            tr.target.ptr = 0;
            tr.cookie = 0;
            cmd = BR_REPLY;
        }
        tr.code = t->code;
        tr.flags = t->flags;
        tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        if (t->from) {
            struct task_struct *sender = t->from->proc->tsk;

            tr.sender_pid = task_tgid_nr_ns(sender,
                            task_active_pid_ns(current));
        } else {
            tr.sender_pid = 0;
        }

        // data size
        tr.data_size = t->buffer->data_size;
        // size of the offsets array for objects in the data (i.e. object count)
        tr.offsets_size = t->buffer->offsets_size;
        // the buffer is shared memory; adding the offset converts the address
        // to a user-space virtual address, which user space can access directly
        tr.data.ptr.buffer = (binder_uintptr_t)(
                    (uintptr_t)t->buffer->data +
                    proc->user_buffer_offset);
        tr.data.ptr.offsets = tr.data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        if (put_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (copy_to_user(ptr, &tr, sizeof(tr)))
            return -EFAULT;
        ptr += sizeof(tr);

        trace_binder_transaction_received(t);
        binder_stat_br(proc, thread, cmd);
        ...

        list_del(&t->work.entry);
        t->buffer->allow_user_free = 1;
        if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
            ...
        } else {
            t->buffer->transaction = NULL;
            kfree(t);
            binder_stats_deleted(BINDER_STAT_TRANSACTION);
        }
        break;
    }

done:

    *consumed = ptr - buffer;
    ...
    return 0;
}

1 Take out the pending work item, of type BINDER_WORK_TRANSACTION, and obtain the binder_transaction t

2 Move the data from t into tr (the struct used for data exchange) with command BR_REPLY, then copy it to user space

3 Update consumed to indicate that the data has been consumed
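The address conversion in step 2 — how tr.data.ptr.buffer and tr.data.ptr.offsets are computed — comes down to two small pieces of arithmetic. A hedged sketch (`to_user_addr` and `align_up` are illustrative helpers, not driver functions):

```cpp
#include <cstdint>

// mmap maps the same physical pages into both the kernel and the process,
// so converting a kernel buffer address to the user-space address is just
// adding a per-process constant
// (proc->user_buffer_offset = user_base - kernel_base).
uint64_t to_user_addr(uint64_t kernel_addr, uint64_t user_buffer_offset) {
    return kernel_addr + user_buffer_offset;
}

// The offsets array sits right after the payload, aligned to pointer size,
// mirroring:
//   tr.data.ptr.offsets = tr.data.ptr.buffer + ALIGN(data_size, sizeof(void*))
uint64_t align_up(uint64_t v, uint64_t a) {
    return (v + a - 1) & ~(a - 1);
}
```

This is why the receiver can read the payload directly without another copy: only the pointer value changes, not the data.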

Back in user space, two commands arrive: BR_NOOP and BR_REPLY. They are processed the same way as described in the addService article, taking out and handling each command in turn. Here we focus on the handling of BR_REPLY, because this time data is returned and needs to be processed.

Handling the data returned by the peer (user space)

waitForResponse

C++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = (uint32_t)mIn.readInt32();
        ...

        switch (cmd) {
            ...
            case BR_REPLY:
                {
                    binder_transaction_data tr;
                    err = mIn.read(&tr, sizeof(tr));
                    ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                    if (err != NO_ERROR) goto finish;

                    if (reply) {
                        if ((tr.flags & TF_STATUS_CODE) == 0) {
                            reply->ipcSetDataReference(
                                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                                tr.data_size,
                                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                                tr.offsets_size/sizeof(binder_size_t),
                                freeBuffer, this);
                        } else {
                            ...
                        }
                    } else {
                        ...
                    }
                }
                goto finish;
            ...
        }
    }
    ...

    return err;
}

ipcSetDataReference

C++
void Parcel::ipcSetDataReference(const uint8_t* data, size_t dataSize,
    const binder_size_t* objects, size_t objectsCount, release_func relFunc, void* relCookie)
{
    binder_size_t minOffset = 0;
    freeDataNoInit();
    mError = NO_ERROR;
    mData = const_cast<uint8_t*>(data);
    mDataSize = mDataCapacity = dataSize;
    //ALOGI("setDataReference Setting data size of %p to %lu (pid=%d)", this, mDataSize, getpid());
    mDataPos = 0;
    ALOGV("setDataReference Setting data pos of %p to %zu", this, mDataPos);
    mObjects = const_cast<binder_size_t*>(objects);
    mObjectsSize = mObjectsCapacity = objectsCount;
    mNextObjectHint = 0;
    mOwner = relFunc;
    mOwnerCookie = relCookie;
    for (size_t i = 0; i < mObjectsSize; i++) {
        binder_size_t offset = mObjects[i];
        if (offset < minOffset) {
            ALOGE("%s: bad object offset %" PRIu64 " < %" PRIu64 "\n",
                  __func__, (uint64_t)offset, (uint64_t)minOffset);
            mObjectsSize = 0;
            break;
        }
        minOffset = offset + sizeof(flat_binder_object);
    }
    scanForFds();
}

The data is placed into the Parcel reply, and execution completes.
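The offset-validation loop in ipcSetDataReference can be distilled as follows. This is an illustrative re-creation (`offsets_valid` and `kFlatObjSize` are invented names; 24 only approximates sizeof(flat_binder_object), whose exact value is ABI-dependent):

```cpp
#include <cstdint>
#include <vector>

// Approximates sizeof(flat_binder_object) on 64-bit; ABI-dependent.
constexpr uint64_t kFlatObjSize = 24;

// Object offsets must be in increasing order and must not overlap the
// previous flat_binder_object; otherwise ipcSetDataReference discards the
// whole object table (mObjectsSize = 0).
bool offsets_valid(const std::vector<uint64_t>& offsets) {
    uint64_t min_offset = 0;
    for (uint64_t off : offsets) {
        if (off < min_offset) return false;  // overlaps the previous object
        min_offset = off + kFlatObjSize;
    }
    return true;
}
```

The check matters because the offsets came across a process boundary: a malicious or buggy sender could otherwise make two object slots alias each other.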

Back to the call that initiated getService

C++
virtual sp<IBinder> checkService( const String16& name) const
{
    Parcel data, reply;
    // an int32 plus a string (the string is "android.os.IServiceManager")
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    // the service name, i.e. "media.player"
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}

As you can see, once remote()->transact completes, reply already holds the returned data; finally reply.readStrongBinder() is executed.

readStrongBinder

C++
sp<IBinder> Parcel::readStrongBinder() const
{
    sp<IBinder> val;
    unflatten_binder(ProcessState::self(), *this, &val);
    return val;
}

unflatten_binder

C++
status_t unflatten_binder(const sp<ProcessState>& proc,
    const Parcel& in, sp<IBinder>* out)
{
    const flat_binder_object* flat = in.readObject(false);

    if (flat) {
        switch (flat->type) {
            case BINDER_TYPE_BINDER:
                *out = reinterpret_cast<IBinder*>(flat->cookie);
                return finish_unflatten_binder(NULL, *flat, in);
            // BINDER_TYPE_HANDLE takes this branch
            case BINDER_TYPE_HANDLE:
                *out = proc->getStrongProxyForHandle(flat->handle);
                return finish_unflatten_binder(
                    static_cast<BpBinder*>(out->get()), *flat, in);
        }
    }
    return BAD_TYPE;
}
getStrongProxyForHandle

C++
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);
    // look up the entry by handle in the vector mHandleToObject;
    // if absent, create an empty entry and return it
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        IBinder* b = e->binder; // NULL on first lookup
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) { // not ServiceManager here, so handle is non-zero
                ...
            }

            // create a BpBinder that records the handle
            b = new BpBinder(handle); 
            // store the BpBinder and its weak reference into the handle_entry
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

Main operation: create a BpBinder that records the handle value.
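The caching behaviour can be sketched as below — a simplified model using invented names `BpBinderSim` and `ProcessStateSim`; real BpBinder construction and the weak-reference bookkeeping are omitted:

```cpp
#include <map>
#include <memory>

// Stand-in for BpBinder: a proxy that only records its handle.
struct BpBinderSim {
    explicit BpBinderSim(int h) : handle(h) {}
    const int handle;  // the driver-assigned reference descriptor
};

// Stand-in for ProcessState's handle -> proxy cache: one proxy per handle,
// created lazily on first lookup and reused on every later lookup.
class ProcessStateSim {
public:
    std::shared_ptr<BpBinderSim> getStrongProxyForHandle(int handle) {
        auto& entry = mHandleToObject[handle];  // creates empty slot if absent
        if (!entry)
            entry = std::make_shared<BpBinderSim>(handle);  // new BpBinder(handle)
        return entry;  // the same proxy object on repeated calls
    }
private:
    std::map<int, std::shared_ptr<BpBinderSim>> mHandleToObject;
};
```

The design choice mirrored here: within one process there is a single proxy per handle, so two components asking for the same service share one BpBinder rather than each constructing their own.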

Returning further up, back to getMediaPlayerService

C++
/*static*/const sp<IMediaPlayerService>
IMediaDeathNotifier::getMediaPlayerService()
{
    Mutex::Autolock _l(sServiceLock);
    if (sMediaPlayerService == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            // returns the BpBinder
            binder = sm->getService(String16("media.player"));
            if (binder != 0) {
                break;
            }
            usleep(500000); // 0.5 s
        } while (true);

        if (sDeathNotifier == NULL) {
            sDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(sDeathNotifier);
        sMediaPlayerService = interface_cast<IMediaPlayerService>(binder);
    }
    return sMediaPlayerService;
}

The interface_cast() template

interface_cast<IMediaPlayerService>(binder)

c++
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
c++
// frameworks/av/media/libmedia/IMediaPlayerService.cpp
IMPLEMENT_META_INTERFACE(MediaPlayerService, "android.media.IMediaPlayerService");

The template expands just as it does for ServiceManager (see 《Binder机制 - ServiceManager的获取》 for details); the expanded code is given directly here:

c++
android::sp<IMediaPlayerService> IMediaPlayerService::asInterface(const android::sp<android::IBinder>& obj)
{
   android::sp<IMediaPlayerService> intr;
   if (obj != NULL) {
       intr = static_cast<IMediaPlayerService*>(
           obj->queryLocalInterface(
                   IMediaPlayerService::descriptor).get());
       if (intr == NULL) {
           intr = new BpMediaPlayerService(obj);
       }
   }
   return intr;
}