Binder - 3: How a Service Gets Registered

1. Preface

Once servicemanager has been initialized, we can register our Services with it. System Services provide the interfaces that ordinary Android applications depend on, so they need to be registered with servicemanager right after the system boots. That leaves two questions to clear up:

  • 1. How do we get hold of a reference to servicemanager?

  • 2. How does registering a Service actually work?

This article works through these two questions, using SurfaceFlinger as the entry point. Analyzing how AIDL binds to a Java-layer Service would reach the same conclusions, but SurfaceFlinger is a native-layer Service, which sits closer to servicemanager, so there are fewer layers to translate through.

2. The Client Process: addService

SurfaceFlinger's entry point is frameworks\native\services\surfaceflinger\main_surfaceflinger.cpp, once again a main function:

c++
int main(int, char**) {
    // Set the maximum number of binder threads
    ProcessState::self()->setThreadPoolMaxThreadCount(4);
    // Start the thread pool
    sp<ProcessState> ps(ProcessState::self());
    ps->startThreadPool();
    // Create the SurfaceFlinger object
    sp<SurfaceFlinger> flinger = surfaceflinger::createSurfaceFlinger();
    flinger->init();
    // Perform addService
    sp<IServiceManager> sm(defaultServiceManager());
    sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false,
                   IServiceManager::DUMP_FLAG_PRIORITY_CRITICAL | IServiceManager::DUMP_FLAG_PROTO);
    // Start SurfaceFlinger's message loop
    flinger->run();
    return 0;
}

As you can see, the IServiceManager object comes from defaultServiceManager:

2.1 Client process: obtaining servicemanager

frameworks\native\libs\binder\IServiceManager.cpp

c++
using AidlServiceManager = android::os::IServiceManager;

sp<IServiceManager> defaultServiceManager()
{
    std::call_once(gSmOnce, []() {
        sp<AidlServiceManager> sm = nullptr;
        while (sm == nullptr) {
            // getContextObject
            sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
            if (sm == nullptr) {
                ALOGE("Waiting 1s on context object on %s.", ProcessState::self()->getDriverName().c_str());
                sleep(1);
            }
        }
        // Create the ServiceManagerShim object
        gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);
    });
    return gDefaultServiceManager;
}

The body of this method runs only once. It first fetches the context object from ProcessState via getContextObject; if that returns nothing, it sleeps for 1s and retries, and once it has the object it creates a ServiceManagerShim around it.

There are three key elements in this code:

  • 1. ProcessState::self()->getContextObject

  • 2. interface_cast

  • 3. The ServiceManagerShim object

Let's take them one at a time:

2.1.1 Client process: ProcessState and getContextObject

frameworks\native\libs\binder\ProcessState.cpp

We know ProcessState is a singleton, one instance per process, so let's go straight to its getContextObject method:

c++
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    sp<IBinder> context = getStrongProxyForHandle(0);
    return context;
}

It simply calls getStrongProxyForHandle with 0 as the handle:

c++
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    handle_entry* e = lookupHandleLocked(handle);
    if (e != nullptr) {
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            //!! The handle == 0 case
            if (handle == 0) {
                IPCThreadState* ipc = IPCThreadState::self();
                Parcel data;
                // Ping the other side to see whether it is still alive
                status_t status = ipc->transact(
                        0, IBinder::PING_TRANSACTION, data, nullptr, 0);
                if (status == DEAD_OBJECT)
                   return nullptr;
            }
            // Create a BpBinder object with handle 0
            sp<BpBinder> b = BpBinder::create(handle);
            e->binder = b.get();
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}

A BpBinder is created directly here with handle 0 and returned as an IBinder. BpBinder is the local proxy for a remote object, and when the handle is 0 it stands for none other than servicemanager.

frameworks\native\libs\binder\BpBinder.cpp

c++
class BpBinder : public IBinder

sp<BpBinder> BpBinder::create(const sp<RpcSession>& session, const RpcAddress& address) {
    return sp<BpBinder>::make(RpcHandle{session, address});
}

BpBinder::BpBinder(Handle&& handle)
      : mStability(0),
        mHandle(handle),
        mAlive(true),
        mObitsSent(false),
        mObituaries(nullptr),
        mTrackedUid(-1) {
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
}

BpBinder inherits from IBinder, and there is nothing fancy about how it is created.

Which means our getContextObject ultimately returns BpBinder(0).

2.1.2 Client process: interface conversion with interface_cast

frameworks\native\include\binder\IInterface.h

c++
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

This is a template function that calls INTERFACE's asInterface. Here INTERFACE is AidlServiceManager, which is just an alias for android::os::IServiceManager. Searching the source tree for that class directly turns up nothing, because since Android 11 android::os::IServiceManager has been implemented with AIDL: the source lives at frameworks/native/libs/binder/aidl/android/os/IServiceManager.aidl, and the generated code ends up at out/soong/.intermediates/frameworks/native/libs/binder/libbinder/android_native_bridge_arm64_armv8-a_shared/gen/aidl/android/os/IServiceManager.cpp.

Two macros stand out in the generated code:

c++
DECLARE_META_INTERFACE(ServiceManager);
DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");

They are defined as follows:

c++
 #define DECLARE_META_INTERFACE(INTERFACE)                               \
public:                                                                 \
    static const ::android::String16 descriptor;                        \
    static ::android::sp<I##INTERFACE> asInterface(                     \
            const ::android::sp<::android::IBinder>& obj);              \
    virtual const ::android::String16& getInterfaceDescriptor() const;  \
    I##INTERFACE();                                                     \
    virtual ~I##INTERFACE();                                            \
    static bool setDefaultImpl(std::unique_ptr<I##INTERFACE> impl);     \
    static const std::unique_ptr<I##INTERFACE>& getDefaultImpl();       \
private:                                                                \
    static std::unique_ptr<I##INTERFACE> default_impl;                  \
public:

 #define DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(INTERFACE, NAME)\
    const ::android::StaticString16                                     \
        I##INTERFACE##_descriptor_static_str16(__IINTF_CONCAT(u, NAME));\
    const ::android::String16 I##INTERFACE::descriptor(                 \
        I##INTERFACE##_descriptor_static_str16);                        \
    const ::android::String16&                                          \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    ::android::sp<I##INTERFACE> I##INTERFACE::asInterface(              \
            const ::android::sp<::android::IBinder>& obj)               \
    {                                                                   \
        ::android::sp<I##INTERFACE> intr;                               \
        if (obj != nullptr) {                                           \
            intr = ::android::sp<I##INTERFACE>::cast(                   \
                obj->queryLocalInterface(I##INTERFACE::descriptor));    \
            if (intr == nullptr) {                                      \
                intr = ::android::sp<Bp##INTERFACE>::make(obj);         \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    std::unique_ptr<I##INTERFACE> I##INTERFACE::default_impl;           \
    bool I##INTERFACE::setDefaultImpl(std::unique_ptr<I##INTERFACE> impl)\
    {                                                                   \
        /* Only one user of this interface can use this function     */ \
        /* at a time. This is a heuristic to detect if two different */ \
        /* users in the same process use this function.              */ \
        assert(!I##INTERFACE::default_impl);                            \
        if (impl) {                                                     \
            I##INTERFACE::default_impl = std::move(impl);               \
            return true;                                                \
        }                                                               \
        return false;                                                   \
    }                                                                   \
    const std::unique_ptr<I##INTERFACE>& I##INTERFACE::getDefaultImpl() \
    {                                                                   \
        return I##INTERFACE::default_impl;                              \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \

 #define CHECK_INTERFACE(interface, data, reply)                         \
    do {                                                                \
      if (!(data).checkInterface(this)) { return PERMISSION_DENIED; }   \
    } while (false)                                                     \

DECLARE_META_INTERFACE.h文件中调用,代表声明这些函数,DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE.cpp文件中,代表对这些函数的实现,我们以asInterface为例,做宏展开,那么展开之后的代码是这样:

c++
::android::sp<IServiceManager> IServiceManager::asInterface(
    const ::android::sp<::android::IBinder>& obj)
{
    ::android::sp<IServiceManager> intr;
    if (obj != nullptr) {
        intr = ::android::sp<IServiceManager>::cast(
            obj->queryLocalInterface(IServiceManager::descriptor));
        if (intr == nullptr) {
            intr = ::android::sp<BpServiceManager>::make(obj);
        }
    }
    return intr;
}

Since our obj is a BpBinder rather than a local object, queryLocalInterface returns null, so the function ends up returning a BpServiceManager. So who exactly is BpServiceManager?

2.1.3 The evolution of BpServiceManager

Before Android 11, BpServiceManager was implemented in /frameworks/native/libs/binder/IServiceManager.cpp:

c++
class BpServiceManager : public BpInterface<IServiceManager>

From Android 11 onward, IServiceManager.cpp is generated from AIDL, and BpServiceManager is generated along with it. Besides BpServiceManager there is also a BnServiceManager; these are the Binder interface's client-side proxy and server-side implementation. The p in BpServiceManager stands for proxy, and the n in BnServiceManager stands for native. Bp and Bn generally come in pairs, and the other services we meet later are implemented in much the same way.

In other words, interface_cast ultimately returns a BpServiceManager.
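To make the Bp/Bn split concrete, here is a minimal hand-written sketch for a hypothetical IHello interface (IHello, BpHello and BnHello are made-up names; AIDL normally generates this boilerplate, and it uses exactly the DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE macro quoted above):

c++
// Hypothetical interface, for illustration only (assumes libbinder headers).
#include <binder/IInterface.h>
#include <binder/Parcel.h>
using namespace android;

class IHello : public IInterface {
public:
    DECLARE_META_INTERFACE(Hello);
    enum { TRANSACTION_sayHello = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello() = 0;
};

// Client-side proxy: marshals the call and hands it to remote()->transact().
class BpHello : public BpInterface<IHello> {
public:
    explicit BpHello(const sp<IBinder>& impl) : BpInterface<IHello>(impl) {}
    status_t sayHello() override {
        Parcel data, reply;
        data.writeInterfaceToken(IHello::getInterfaceDescriptor());
        return remote()->transact(TRANSACTION_sayHello, data, &reply);
    }
};
DO_NOT_DIRECTLY_USE_ME_IMPLEMENT_META_INTERFACE(Hello, "com.example.IHello");

// Server-side stub: unmarshals the Parcel and dispatches to the real object.
class BnHello : public BnInterface<IHello> {
public:
    status_t onTransact(uint32_t code, const Parcel& data, Parcel* reply,
                        uint32_t flags) override {
        switch (code) {
        case TRANSACTION_sayHello:
            CHECK_INTERFACE(IHello, data, reply);
            return sayHello();
        default:
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

On the client side, asInterface produces a BpHello wrapping the remote IBinder, while the service process subclasses BnHello; this is exactly the relationship between BpServiceManager and BnServiceManager.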

2.1.4 ServiceManagerShim, the shadow of ServiceManager

The final step: use the BpServiceManager obtained above to construct a ServiceManagerShim object:

frameworks\native\libs\binder\IServiceManager.cpp

c++
ServiceManagerShim::ServiceManagerShim(const sp<AidlServiceManager>& impl)
 : mTheRealServiceManager(impl)
{}

The constructor stores the BpServiceManager we passed in into mTheRealServiceManager. As the name suggests, ServiceManagerShim is merely a middleman; the real work is done by mTheRealServiceManager.

At this point defaultServiceManager has its result: a ServiceManagerShim, whose real backing object is a BpServiceManager.
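As a quick sanity check of how this object is used, any process can later look a registered service up through the same shim (a sketch; checkService returns null immediately if the service is absent, while getService would wait for it to appear):

c++
#include <binder/IServiceManager.h>
using namespace android;

// Look up SurfaceFlinger by name through the default service manager.
sp<IServiceManager> sm = defaultServiceManager();              // the ServiceManagerShim
sp<IBinder> binder = sm->checkService(String16("SurfaceFlinger"));
// If non-null, binder is a BpBinder whose handle refers to SurfaceFlinger.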

2.2 ServiceManager's addService

Now that we hold defaultServiceManager, we can perform addService:

c++
// getServiceName returns "SurfaceFlinger" here
sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false,
               IServiceManager::DUMP_FLAG_PRIORITY_CRITICAL | IServiceManager::DUMP_FLAG_PROTO);

This calls ServiceManagerShim's addService method:

c++
status_t ServiceManagerShim::addService(const String16& name, const sp<IBinder>& service,
                                        bool allowIsolated, int dumpsysPriority)
{
    Status status = mTheRealServiceManager->addService(
        String8(name).c_str(), service, allowIsolated, dumpsysPriority);
    return status.exceptionCode();
}

As you can see, this really is just a "middleman", and we know mTheRealServiceManager is the BpServiceManager:

c++
::android::binder::Status BpServiceManager::addService(const ::std::string &name, const ::android::sp<::android::IBinder> &service, bool allowIsolated, int32_t dumpPriority)
{
    ::android::Parcel _aidl_data;
    _aidl_data.markForBinder(remoteStrong());
    ::android::Parcel _aidl_reply;
    ::android::status_t _aidl_ret_status = ::android::OK;
    ::android::binder::Status _aidl_status;
    _aidl_ret_status = _aidl_data.writeInterfaceToken(getInterfaceDescriptor());
    // Write the name
    _aidl_ret_status = _aidl_data.writeUtf8AsUtf16(name);
    // Write the binder reference
    _aidl_ret_status = _aidl_data.writeStrongBinder(service);
    // The actual communication
    _aidl_ret_status = remote()->transact(BnServiceManager::TRANSACTION_addService, _aidl_data, &_aidl_reply, 0);
    // Read the return value
    _aidl_ret_status = _aidl_status.readFromParcel(_aidl_reply);
    if (((_aidl_ret_status) != (::android::OK)))
    {
        goto _aidl_error;
    }
    if (!_aidl_status.isOk())
    {
        return _aidl_status;
    }
_aidl_error:
    _aidl_status.setFromStatusT(_aidl_ret_status);
    return _aidl_status;
}

  • 1. Fill in the data; the data structure is a Parcel

  • 2. Do the actual communication

  • 3. Read the return value

Of everything written into the Parcel, the most important piece is the reference to the service, so let's look at writeStrongBinder:

2.2.1 Client process: writing the Binder reference with writeStrongBinder

c++
status_t Parcel::writeStrongBinder(const sp<IBinder>& val)
{
    return flattenBinder(val);
}

status_t Parcel::flattenBinder(const sp<IBinder>& binder)
{
    flat_binder_object obj;
    obj.flags = FLAT_BINDER_FLAG_ACCEPTS_FDS;
    if (binder != nullptr) {
        BBinder *local = binder->localBinder();
        // Null means we hold a proxy, i.e. we are not the server side
        if (!local) {
            BpBinder *proxy = binder->remoteBinder();
            const int32_t handle = proxy ? proxy->getPrivateAccessorForId().binderHandle() : 0;
            obj.hdr.type = BINDER_TYPE_HANDLE;
            obj.binder = 0; /* Don't pass uninitialized stack data to a remote process */
            obj.handle = handle;
            obj.cookie = 0;
        } else {
            obj.hdr.type = BINDER_TYPE_BINDER;
            obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
            obj.cookie = reinterpret_cast<uintptr_t>(local);
        }
    } else {
        obj.hdr.type = BINDER_TYPE_BINDER;
        obj.binder = 0;
        obj.cookie = 0;
    }
    obj.flags |= schedBits; // scheduling-policy bits computed earlier (elided above)
    status_t status = writeObject(obj, false);
    if (status != OK) return status;
    return finishFlattenBinder(binder); // tail of the real function: records stability metadata
}

flattenBinder behaves differently on the Client and Server sides. On the Client side the object doing the talking is a BpBinder, obtained via remoteBinder; on the Server side it is a BBinder, obtained via localBinder, i.e. the Server itself. Here we are on the Server side, so localBinder is non-null, and the addresses of local and of its weak reference are written into the cookie and binder fields.
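For reference, this is the shape of flat_binder_object as defined in the kernel UAPI header (include/uapi/linux/android/binder.h; reproduced from memory, so treat it as a sketch):

c++
struct binder_object_header {
    __u32 type;                   // BINDER_TYPE_BINDER, BINDER_TYPE_HANDLE, ...
};

struct flat_binder_object {
    struct binder_object_header hdr;
    __u32 flags;                  // e.g. FLAT_BINDER_FLAG_ACCEPTS_FDS
    union {
        binder_uintptr_t binder;  // local entity: address of its weak refs
        __u32 handle;             // remote entity: driver-assigned handle
    };
    binder_uintptr_t cookie;      // local entity: address of the BBinder
};

The binder/handle union is exactly what the two branches of flattenBinder fill in.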

2.2.2 Client process: transaction handling with transact

c++
_aidl_ret_status = remote()->transact(BnServiceManager::TRANSACTION_addService, _aidl_data, &_aidl_reply, 0);

Here remote() is the BpBinder(0) we obtained earlier; enter its transact method:

c++
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    if (mAlive) {
        flags = flags & ~FLAG_PRIVATE_VENDOR;
        status_t status;
        if (CC_UNLIKELY(isRpcBinder())) {
            status = rpcSession()->transact(rpcAddress(), code, data, reply, flags);
        } else {
            // Delegate to IPCThreadState's transact()
            status = IPCThreadState::self()->transact(binderHandle(), code, data, reply, flags);
        }
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}

As you can see, it delegates straight to IPCThreadState:

c++
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err;
    flags |= TF_ACCEPT_FDS;
    // Prepare the data to send
    err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, nullptr);
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            // Wait for the result
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            // Even when no return value is needed, build a fake reply so the call stays synchronous
            err = waitForResponse(&fakeReply);
        }
    } else {
        // Asynchronous (oneway) execution
        err = waitForResponse(nullptr, nullptr);
    }
    return err;
}

First it prepares the data to be sent, then waits for the result to come back. That sounds odd: how can we wait for a response before we have even started communicating? The answer lies inside waitForResponse; but first, the implementation of writeTransactionData:

2.2.3 Client process: writing the binder_transaction_data transaction

c++
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;
    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;
    const status_t err = data.errorCheck();
    // max(mDataSize, mDataPos)
    tr.data_size = data.ipcDataSize();
    // the mData pointer
    tr.data.ptr.buffer = data.ipcData();
    // combined size of all the object offsets
    tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
    // the mObjects pointer
    tr.data.ptr.offsets = data.ipcObjects();
    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));
    return NO_ERROR;
}

This introduces binder_transaction_data, a structure defined in the kernel and used for talking to the driver layer; it represents one communication transaction. It is written into mOut, and we will meet it again later.

Figure 2.1 - The basic layout of binder_transaction_data
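The figure simply shows the struct's layout; as declared in the UAPI header it looks roughly like this (quoted from memory, with field types slightly simplified, so treat it as a sketch):

c++
struct binder_transaction_data {
    union {
        __u32 handle;              // target: handle of the remote node (used here)
        binder_uintptr_t ptr;      // target: local node pointer (filled in on delivery)
    } target;
    binder_uintptr_t cookie;       // target cookie (the BBinder address on delivery)
    __u32 code;                    // e.g. TRANSACTION_addService
    __u32 flags;                   // e.g. TF_ONE_WAY
    __kernel_pid_t sender_pid;
    __kernel_uid32_t sender_euid;
    binder_size_t data_size;       // size of the payload
    binder_size_t offsets_size;    // size of the object-offset array
    union {
        struct {
            binder_uintptr_t buffer;   // points at the Parcel's mData
            binder_uintptr_t offsets;  // points at the Parcel's mObjects
        } ptr;
        __u8 buf[8];
    } data;
};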

Next we enter waitForResponse:

2.2.4 Client process: getting ready to talk to the driver with waitForResponse

c++
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;
    while (1) {
        // Talk to the driver
        if ((err=talkWithDriver()) < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
            //...
        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }
    return err;
}

This is a while loop, and each iteration starts by communicating with the driver (so the method's name is a little misleading). talkWithDriver does what its name says: it talks to the Binder driver:

2.2.5 Client process: actually talking to the driver in talkWithDriver

c++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    // Has the read buffer been fully consumed?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    // Size of the data waiting in the write buffer
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }
    // Nothing to read and nothing to write
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;
    status_t err;
    do {
        // The system call; the argument is a pointer to bwr
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
    } while (err == -EINTR);
    return err;
}

This introduces binder_write_read, the main data structure exchanged between the Binder driver and user space. mIn holds input, i.e. the data to be read out; mOut holds the data to be written. We are performing a write, so mOut is non-empty.

Figure 2.2 - The layout of binder_write_read
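The structure behind the figure is small enough to quote outright (from include/uapi/linux/android/binder.h):

c++
struct binder_write_read {
    binder_size_t write_size;       // bytes available in write_buffer
    binder_size_t write_consumed;   // bytes the driver consumed
    binder_uintptr_t write_buffer;  // points at mOut's data
    binder_size_t read_size;        // bytes available in read_buffer
    binder_size_t read_consumed;    // bytes the driver filled in
    binder_uintptr_t read_buffer;   // points at mIn's data
};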

2.2.6 Into the kernel: binder_ioctl_write_read

Kernel\drivers\android\binder.c

We have already analyzed the Binder driver's initialization. When user space enters the driver through the ioctl system call, binder_ioctl takes over; our cmd is BINDER_WRITE_READ, so we land in binder_ioctl_write_read:

c++
static int binder_ioctl_write_read(struct file *filp,
                unsigned int cmd, unsigned long arg,
                struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;
    // Copy the binder_write_read structure from the user process
    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    // The write path
    if (bwr.write_size > 0) {
        ret = binder_thread_write(proc, thread,
                      bwr.write_buffer,
                      bwr.write_size,
                      &bwr.write_consumed);
    }
    // The read path
    if (bwr.read_size > 0) {
        ret = binder_thread_read(proc, thread, bwr.read_buffer,
                     bwr.read_size,
                     &bwr.read_consumed,
                     filp->f_flags & O_NONBLOCK);
        if (!binder_worklist_empty_ilocked(&proc->todo))
            binder_wakeup_proc_ilocked(proc);
    }
    // Copy the results back to user space
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) {
        ret = -EFAULT;
        goto out;
    }
    return ret;
}

  • 1. First the binder_write_read structure is copied in from the user process. We often say Binder does only "one copy"; is this it? No: binder_write_read holds only sizes and pointers, and copying it is cheap. The "one copy" everyone talks about refers to copying the payload data, which we expand on later. Strictly speaking, then, Binder is not literally "one copy".

  • 2. If there is data to write, perform the write

  • 3. If there is data to read, perform the read

Since for now we mainly care about writing the data, let's look at binder_thread_write:

2.2.7 In the kernel: binder_thread_write

Kernel\drivers\android\binder.c

c++
static int binder_thread_write(struct binder_proc *proc,
            struct binder_thread *thread,
            binder_uintptr_t binder_buffer, size_t size,
            binder_size_t *consumed)
{
    uint32_t cmd;
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;
    while (ptr < end && thread->return_error.cmd == BR_OK) {
        int ret;
        if (get_user(cmd, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_TRANSACTION:
        case BC_REPLY: {
            struct binder_transaction_data tr;
            // Copy the binder_transaction_data from the user process
            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr,
                       cmd == BC_REPLY, 0);
            break;
        }
        }
        *consumed = ptr - buffer;
    }
    return 0;
}

This is in fact a very long function, but the cmd we wrote into mOut was BC_TRANSACTION, so that branch is all we need to follow.

It first copies the binder_transaction_data from user space into the kernel and then calls binder_transaction:

2.2.8 In the kernel: transaction handling in binder_transaction

c++
static void binder_transaction(struct binder_proc *proc,
                   struct binder_thread *thread,
                   struct binder_transaction_data *tr, int reply,
                   binder_size_t extra_buffers_size)
{
    int ret;
    struct binder_transaction *t;
    struct binder_work *w;
    struct binder_work *tcomplete;
    binder_size_t buffer_offset = 0;
    binder_size_t off_start_offset, off_end_offset;
    binder_size_t off_min;
    binder_size_t sg_buf_offset, sg_buf_end_offset;
    struct binder_proc *target_proc = NULL;
    struct binder_thread *target_thread = NULL;
    struct binder_node *target_node = NULL;
    struct binder_transaction *in_reply_to = NULL;
    uint32_t return_error = 0;
    struct binder_context *context = proc->context;
    char *secctx = NULL;
    u32 secctx_sz = 0;

    if (reply) {
        //...
    } else {
        // Our target is the ServiceManager, so the handle is 0
        if (tr->target.handle) {
            //...
        } else {
            // This is the branch taken
            target_node = context->binder_context_mgr_node;
            if (target_node)
                // Take temporary refs on the node and proc; also fills in target_proc
                target_node = binder_get_node_refs_for_txn(
                        target_node, &target_proc,
                        &return_error);
        }
        // Fetch any work not yet executed; assume the list is empty here
        w = list_first_entry_or_null(&thread->todo,
                         struct binder_work, entry);
    }
    if (target_thread)
        e->to_thread = target_thread->pid; // e: transaction log entry (declaration elided)
    e->to_proc = target_proc->pid;
    t = kzalloc(sizeof(*t), GFP_KERNEL);
    INIT_LIST_HEAD(&t->fd_fixups);
    tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
    if (!reply && !(tr->flags & TF_ONE_WAY))
        // Record where the transaction came from; the reply will find its way back here
        t->from = thread;
    else
        t->from = NULL;
    t->sender_euid = task_euid(proc->tsk);
    t->to_proc = target_proc;
    t->to_thread = target_thread;
    t->code = tr->code;
    t->flags = tr->flags;
    //...
    // Allocate a buffer on the target side and back it with physical pages. In kernel
    // space the data and offsets regions sit contiguously. This is where the famous
    // "one copy" comes in; we analyze it in a separate article.
    t->buffer = binder_alloc_new_buf(&target_proc->alloc, tr->data_size,
        tr->offsets_size, extra_buffers_size,
        !reply && (t->flags & TF_ONE_WAY), current->tgid);
    t->buffer->debug_id = t->debug_id;
    t->buffer->transaction = t;
    t->buffer->target_node = target_node;
    t->buffer->clear_on_free = !!(t->flags & TF_CLEAR_BUF);

    // These two copies of the user process's data matter enormously. Five stars!
    // Copy the data buffer
    if (binder_alloc_copy_user_to_buffer(
                &target_proc->alloc,
                t->buffer, 0,
                (const void __user *)
                    (uintptr_t)tr->data.ptr.buffer,
                tr->data_size)) {
        //error
    }
    // Copy the offsets
    if (binder_alloc_copy_user_to_buffer(
                &target_proc->alloc,
                t->buffer,
                ALIGN(tr->data_size, sizeof(void *)),
                (const void __user *)
                    (uintptr_t)tr->data.ptr.offsets,
                tr->offsets_size)) {
        //error
    }

    // Compute the offset bounds
    off_start_offset = ALIGN(tr->data_size, sizeof(void *));
    buffer_offset = off_start_offset;
    off_end_offset = off_start_offset + tr->offsets_size;
    sg_buf_offset = ALIGN(off_end_offset, sizeof(void *));
    sg_buf_end_offset = sg_buf_offset + extra_buffers_size -
        ALIGN(secctx_sz, sizeof(u64));
    off_min = 0;
    // Split the data according to the offsets
    for (buffer_offset = off_start_offset; buffer_offset < off_end_offset;
         buffer_offset += sizeof(binder_size_t)) {
        struct binder_object_header *hdr;
        size_t object_size;
        struct binder_object object;
        binder_size_t object_offset;

        // Read the object offset back out of the buffer we just filled
        if (binder_alloc_copy_from_buffer(&target_proc->alloc,
                          &object_offset,
                          t->buffer,
                          buffer_offset,
                          sizeof(object_offset))) {

        }

        object_size = binder_get_object(target_proc, t->buffer,
                        object_offset, &object);
        hdr = &object.hdr;
        off_min = object_offset + object_size;
        // Here the type is BINDER_TYPE_BINDER; see 'writeStrongBinder'
        switch (hdr->type) {
            case BINDER_TYPE_BINDER:
            case BINDER_TYPE_WEAK_BINDER: {
                struct flat_binder_object *fp;
                // Recover fp from hdr
                fp = to_flat_binder_object(hdr);
                // Fill in fp, assigning its handle
                ret = binder_translate_binder(fp, t, thread);
                if (ret < 0 ||
                    // Copy the flat_binder_object, i.e. the Binder object, into the target process
                    binder_alloc_copy_to_buffer(&target_proc->alloc,
                                t->buffer,
                                object_offset,
                                fp, sizeof(*fp))) {
                    //error
                }
            } break;
        }
    }
    // If no return value is needed, the call can end here. The driver eventually turns
    // this into BR_TRANSACTION_COMPLETE, which IPCThreadState can observe and act on.
    if (t->buffer->oneway_spam_suspect)
        tcomplete->type = BINDER_WORK_TRANSACTION_ONEWAY_SPAM_SUSPECT;
    else
        tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
    // Used further below
    t->work.type = BINDER_WORK_TRANSACTION;

    if (reply) {
        //...
    } else if (!(t->flags & TF_ONE_WAY)) {
        // The synchronous (non-oneway) case, elided here; it likewise queues
        // tcomplete and wakes the target, and additionally sets up the reply path

    } else {
        // The oneway case
        // Queue tcomplete on the work list, as described above
        binder_enqueue_thread_work(thread, tcomplete);
        // Wake up the peer
        return_error = binder_proc_transaction(t, target_proc, NULL);
    }
    return;
}

If the request needs no return value and no waiting, the call can finish right here: the driver emits a BR_TRANSACTION_COMPLETE event, and when IPCThreadState sees that event it can wrap up normally.
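To keep the command protocol straight, here is a rough ladder of one synchronous call (my own summary, not source code; BC_* are commands user space writes to the driver, BR_* are returns it reads back):

c++
// Client                        Driver                       Server
// BC_TRANSACTION  ----------->  copy data, queue work
// BR_TRANSACTION_COMPLETE <---  (write accepted)
//                               deliver work  ------------>  BR_TRANSACTION
//                                                            ...onTransact()...
//                               queue reply   <------------  BC_REPLY
// BR_REPLY        <-----------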

There is one more important method involved here, binder_proc_transaction:

c++
static int binder_proc_transaction(struct binder_transaction *t,
                    struct binder_proc *proc,
                    struct binder_thread *thread)
{
    struct binder_node *node = t->buffer->target_node;
    struct binder_priority node_prio;
    bool oneway = !!(t->flags & TF_ONE_WAY); // declared in the real source; needed below
    bool pending_async = false;

    if (!thread && !pending_async)
        thread = binder_select_thread_ilocked(proc);

    // Queue the work
    if (thread) {
        binder_transaction_priority(thread->task, t, node_prio,
                        node->inherit_rt);
        binder_enqueue_thread_work_ilocked(thread, &t->work);
    } else if (!pending_async) {
        // No idle thread: queue on the process's todo list
        binder_enqueue_work_ilocked(&t->work, &proc->todo);
    } else {
        // Async work: queue on the node's async todo list
        binder_enqueue_work_ilocked(&t->work, &node->async_todo);
    }

    // Wake up the thread (the epoll mechanism)
    if (!pending_async)
        binder_wakeup_thread_ilocked(proc, thread, !oneway /* sync */);

    return 0;
}

This method wakes up a thread on the other end so that the peer process starts handling the data; here the peer process is service_manager.

binder_transaction is very long and absolutely central; it does a great deal. In summary:

  • 1. Obtain the target binder_node, the target process's binder node; our target is service_manager, so it comes straight from binder_context_mgr_node

  • 2. Initialize the binder_transaction, allocate a buffer, and back it with physical pages

  • 3. Copy the user process's data into kernel space

  • 4. For objects of type BINDER_TYPE_BINDER, assign the flat_binder_object's handle

  • 5. Copy the Parcel's objects to the matching offsets in the target process

  • 6. Add the task to the pending work queue

  • 7. Wake up the peer thread

At this point we have finished copying the user process's data into the kernel, the legendary "one copy". Now think about it: if we want the peer process to handle this data, how could that be arranged?

//TODO link to the one-copy article

  • a. Have the kernel notify the peer process directly

  • b. Have the peer process run a message loop, waking the looping thread when a message arrives (the epoll mechanism)

Binder's implementation takes exactly the message-loop approach, so a message loop runs on the Server side, and our Server here is servicemanager. As mentioned in the article on servicemanager's initialization, once the wakeup signal arrives, execution eventually reaches IPCThreadState::handlePolledCommands, roughly as sketched below; that is where we pick the analysis back up.
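This is approximately the shape of servicemanager's loop from the earlier article (a sketch; BinderCallback is the helper class in frameworks/native/cmds/servicemanager/main.cpp whose callback ends up invoking handlePolledCommands):

c++
// Sketch of servicemanager's main loop.
sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);
BinderCallback::setupTo(looper);   // registers the binder fd with the Looper (epoll)
while (true) {
    looper->pollAll(-1);           // sleeps until the driver wakes the fd
}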

3. The Server Process: service_manager's Handling Logic

3.1 Server process: handling the message loop in handlePolledCommands

c++
status_t IPCThreadState::handlePolledCommands()
{
    status_t result;
    do {
        result = getAndExecuteCommand();
    } while (mIn.dataPosition() < mIn.dataSize());
    processPendingDerefs();
    // Flush anything not yet written out
    flushCommands();
    return result;
}

The method we care about is getAndExecuteCommand:

c++
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;
    // Talk to the driver
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;
        cmd = mIn.readInt32();
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        // Once we have the cmd, run the matching logic
        result = executeCommand(cmd);
        mProcess->mExecutingThreadsCount--;
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            mProcess->mStarvationStartTimeMs = 0;
        }
    }

    return result;
}

It talks to the driver again, but this time we are reading rather than writing. Here is the read logic inside talkWithDriver:

3.1.1 Server process: the read side of talkWithDriver

c++
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    binder_write_read bwr;
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    }
    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);
    if (err >= NO_ERROR) {
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        return NO_ERROR;
    }
    return err;
}

talkWithDriverread过程中,使用到的主要是mIn,构造binder_write_read的时候,将mInmData,也就是用户空间地址的起始位置,设置给read_buffer,然后进入驱动层:

The main flow was analyzed above, so let's cut straight to binder_thread_read:

3.1.2 Into the kernel: binder_thread_read

c++
static int binder_thread_read(struct binder_proc *proc,
                  struct binder_thread *thread,
                  binder_uintptr_t binder_buffer, size_t size,
                  binder_size_t *consumed, int non_block)
{
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    int ret = 0;
    int wait_for_proc_work;

    // On the first read, write a BR_NOOP flag
    if (*consumed == 0) {
        if (put_user(BR_NOOP, (uint32_t __user *)ptr))
            return -EFAULT;
        ptr += sizeof(uint32_t);
    }
retry:
    wait_for_proc_work = binder_available_for_proc_work_ilocked(thread);
    thread->looper |= BINDER_LOOPER_STATE_WAITING;
    if (non_block) {
        if (!binder_has_work(thread, wait_for_proc_work))
            ret = -EAGAIN;
    } else {
        ret = binder_wait_for_work(thread, wait_for_proc_work);
    }
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;
    if (ret)
        return ret;
    while (1) {
        uint32_t cmd;
        struct binder_transaction_data_secctx tr;
        struct binder_transaction_data *trd = &tr.transaction_data;
        struct binder_work *w = NULL;
        struct list_head *list = NULL;
        struct binder_transaction *t = NULL;
        struct binder_thread *t_from;
        size_t trsize = sizeof(*trd);
        if (!binder_worklist_empty_ilocked(&thread->todo))
            list = &thread->todo;
        else if (!binder_worklist_empty_ilocked(&proc->todo) &&
               wait_for_proc_work)
            list = &proc->todo;
        else {
            /* no data added */
            if (ptr - buffer == 4 && !thread->looper_need_return)
                goto retry;
            break;
        }
        if (end - ptr < sizeof(tr) + 4) {
            break;
        }
        // Dequeue the work item
        w = binder_dequeue_work_head_ilocked(list);
        if (binder_worklist_empty_ilocked(&thread->todo))
            thread->process_todo = false;

        switch (w->type) {
            case BINDER_WORK_TRANSACTION: {
                // Recover the binder_transaction from the binder_work
                t = container_of(w, struct binder_transaction, work);
            } break;
        }
        // target_node is the ServiceManager node;
        // fill in the transaction_data's target, cookie and friends
        if (t->buffer->target_node) {
            struct binder_node *target_node = t->buffer->target_node;
            struct binder_priority node_prio;

            trd->target.ptr = target_node->ptr;
            trd->cookie =  target_node->cookie;
            node_prio.sched_policy = target_node->sched_policy;
            node_prio.prio = target_node->min_priority;
            binder_transaction_priority(current, t, node_prio,
                            target_node->inherit_rt);
            cmd = BR_TRANSACTION;
        } else {
            //...
        }
        trd->code = t->code;
        trd->flags = t->flags;
        trd->sender_euid = from_kuid(current_user_ns(), t->sender_euid);

        t_from = binder_get_txn_from(t);
        if (t_from) {
            struct task_struct *sender = t_from->proc->tsk;

            trd->sender_pid =
                task_tgid_nr_ns(sender,
                        task_active_pid_ns(current));
        } else {
            trd->sender_pid = 0;
        }
        //...
        // Fill in trd with exactly the values set earlier in binder_transaction
        trd->data_size = t->buffer->data_size;
        trd->offsets_size = t->buffer->offsets_size;
        trd->data.ptr.buffer = (uintptr_t)t->buffer->user_data;
        trd->data.ptr.offsets = trd->data.ptr.buffer +
                    ALIGN(t->buffer->data_size,
                        sizeof(void *));

        tr.secctx = t->security_ctx;
        // Write cmd to user space; here cmd is BR_TRANSACTION
        if (put_user(cmd, (uint32_t __user *)ptr)) {
            //...
        }
        ptr += sizeof(uint32_t);
        // Write tr to user space; tr is a thin wrapper around binder_transaction_data,
        // which carries the address of the user-space buffer
        if (copy_to_user(ptr, &tr, trsize)) {
            //...
        }
        ptr += trsize;

        if (t_from)
            binder_thread_dec_tmpref(t_from);
        t->buffer->allow_user_free = 1;
        break;
    }
done:
    *consumed = ptr - buffer;
    //...
    return 0;
}

binder_thread_read also does a lot, but for reasons of length we will cover the rest when we discuss ordinary binder communication.

The net result: we hold the data the peer wrote and can return to user space.

3.1.3 Back in user space: getAndExecuteCommand

In getAndExecuteCommand, the cmd is read first, and the next step is chosen based on it; we enter executeCommand:

c++
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
        case BR_TRANSACTION_SEC_CTX:
        case BR_TRANSACTION:
            {
                binder_transaction_data_secctx tr_secctx;
                binder_transaction_data& tr = tr_secctx.transaction_data;
                // Read the binder_transaction_data_secctx
                if (cmd == (int) BR_TRANSACTION_SEC_CTX) {
                    result = mIn.read(&tr_secctx, sizeof(tr_secctx));
                } else {
                    result = mIn.read(&tr, sizeof(tr));
                    tr_secctx.secctx = 0;
                }

                Parcel buffer;
                // Convert it into a Parcel object
                buffer.ipcSetDataReference(
                    reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                    tr.data_size,
                    reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                    tr.offsets_size/sizeof(binder_size_t), freeBuffer);
                //...

                Parcel reply;
                status_t error;
                if (tr.target.ptr) {
                    // Try to get the BBinder behind the target, then call its transact
                    if (reinterpret_cast<RefBase::weakref_type*>(
                            tr.target.ptr)->attemptIncStrong(this)) {
                        error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                                &reply, tr.flags);
                        reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                    } else {
                        error = UNKNOWN_TRANSACTION;
                    }
                } else {
                    // We are servicemanager, so this branch is taken
                    error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
                }
                if ((tr.flags & TF_ONE_WAY) == 0) {
                    LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                    if (error < NO_ERROR) reply.setError(error);

                    constexpr uint32_t kForwardReplyFlags = TF_CLEAR_BUF;
                    // Tell the peer the Binder call is complete
                    sendReply(reply, (tr.flags & kForwardReplyFlags));
                } else {
                    //...
                }
                //...
            }
            break;
    }
    return result;
}

From the foregoing we know our cmd is BR_TRANSACTION, so skipping the other branches, the main steps are:

  • 1. Read the binder_transaction_data_secctx

  • 2. Convert the binder_transaction_data into a Parcel object

  • 3. Get the BBinder object and call its transact method

  • 4. Tell the peer the call is complete

3.2 Server process: BBinder and its transact

c++
class BBinder : public IBinder

BBinder inherits from IBinder and is the server-side embodiment of a Binder entity, what we usually call "the server". transact is where its concrete logic is handled:

c++
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);
    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            err = pingBinder();
            break;
        case EXTENSION_TRANSACTION:
            err = reply->writeStrongBinder(getExtension());
            break;
        case DEBUG_PID_TRANSACTION:
            err = reply->writeInt32(getDebugPid());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }
    return err;
}

It is mostly a wrapper around onTransact, which each service implements for itself. Back in IServiceManager, let's look at the implementation:

3.2.1 Server process: BnServiceManager::onTransact

c++
::android::status_t BnServiceManager::onTransact(uint32_t _aidl_code, const ::android::Parcel &_aidl_data, ::android::Parcel *_aidl_reply, uint32_t _aidl_flags)
{
    ::android::status_t _aidl_ret_status = ::android::OK;
    switch (_aidl_code)
    {
    case BnServiceManager::TRANSACTION_addService:
        {
            ::std::string in_name;
            ::android::sp<::android::IBinder> in_service;
            bool in_allowIsolated;
            int32_t in_dumpPriority;
            _aidl_ret_status = _aidl_data.readUtf8FromUtf16(&in_name); // read the name
            if (((_aidl_ret_status) != (::android::OK)))
            {
                break;
            }
            _aidl_ret_status = _aidl_data.readStrongBinder(&in_service); // read the binder
            if (((_aidl_ret_status) != (::android::OK)))
            {
                break;
            }
            //...
            // Call the real implementation in ServiceManager.cpp
            ::android::binder::Status _aidl_status(addService(in_name, in_service, in_allowIsolated, in_dumpPriority));
            // addService returns a Status object, written into the Parcel _aidl_reply
            _aidl_ret_status = _aidl_status.writeToParcel(_aidl_reply);
            if (((_aidl_ret_status) != (::android::OK)))
            {
                break;
            }
        }
    }
}

BnServiceManageronTransact中,读取了Parcel中的参数,然后调用其addService方法,addService方法由其继承者实现,正是ServiceManager

3.2.2 Server process: ServiceManager::addService

frameworks\native\cmds\servicemanager\ServiceManager.cpp

c++
Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
    auto ctx = mAccess->getCallingContext();

    // implicitly unlinked when the binder is removed
    if (binder->remoteBinder() != nullptr &&
        binder->linkToDeath(sp<ServiceManager>::fromExisting(this)) != OK) {
        return Status::fromExceptionCode(Status::EX_ILLEGAL_STATE);
    }
    // Store name -> binder in the mNameToService map
    mNameToService[name] = Service {
        .binder = binder,
        .allowIsolated = allowIsolated,
        .dumpPriority = dumpPriority,
        .debugPid = ctx.debugPid,
    };
    return Status::ok();
}

In ServiceManager, the IBinder object ends up stored in a map, and ServiceManager's part ends there; later, a Service is fetched simply by name. That looks complete, but what exactly is the Binder object being stored here? In the spirit of getting to the bottom of things, let's trace once more how the Binder object, the Service entity, is passed along; this time it should feel easier. Before that, let's analyze a few functions we skipped earlier, which matter a great deal.

4. Some Functions We Skipped

4.1 "Translating" a Binder object: binder_translate_binder

binder_translate_binder is called from binder_transaction when the type is BINDER_TYPE_BINDER, i.e. when the payload carries a Binder object. What does it do inside?

c++
static int binder_translate_binder(struct flat_binder_object *fp,
                   struct binder_transaction *t,
                   struct binder_thread *thread)
{
    struct binder_node *node;
    struct binder_proc *proc = thread->proc;
    struct binder_proc *target_proc = t->to_proc;
    struct binder_ref_data rdata;
    int ret = 0;

    // Look up the node
    node = binder_get_node(proc, fp->binder);
    if (!node) {
        // No node yet: create one
        node = binder_new_node(proc, fp);
    }
    // Bump the node's reference count
    ret = binder_inc_ref_for_node(target_proc, node,
            fp->hdr.type == BINDER_TYPE_BINDER,
            &thread->todo, &rdata);
    // Rewrite BINDER_TYPE_BINDER into BINDER_TYPE_HANDLE
    if (fp->hdr.type == BINDER_TYPE_BINDER)
        fp->hdr.type = BINDER_TYPE_HANDLE;
    else
        fp->hdr.type = BINDER_TYPE_WEAK_HANDLE;
    // Zero out the binder and cookie fields!!
    // Assign the handle from the ref's desc; treat it as simply being unique
    fp->binder = 0;
    fp->handle = rdata.desc;
    fp->cookie = 0;
done:
    binder_put_node(node);
    return ret;
}

这个"翻译"着实做了不少事,

  • 1、首先给Binder实体创建了一个节点

  • 2、然后增加了此节点的引用计数

  • 3、切换typeBINDER_TYPE_HANDLE

  • 4、最后置空bindercookie字段

Worth pondering: why must the binder and cookie fields be cleared?

Because both fields hold addresses, either of the Binder entity or of its reference. Such a value is meaningless in another process: the two address spaces are not shared, so the fields must be cleared, or we would end up with invalid memory accesses.

Without those two fields, how do we tell Binder objects apart? With the handle field above: it travels to the peer, and the peer later uses it to fetch the corresponding Binder entity, as we will see below.
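Putting 4.1 together, the driver's rewrite of the flat_binder_object amounts to this (a schematic summary of my own, not source code):

c++
// As written by the sender (a local BBinder)   As seen by the receiver
// hdr.type = BINDER_TYPE_BINDER                hdr.type = BINDER_TYPE_HANDLE
// binder   = address of the weak refs          binder   = 0
// cookie   = address of the BBinder            cookie   = 0
//                                              handle   = ref desc, unique within the process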

4.2 Creating a binder node: binder_new_node

We just saw a Binder node being created; here is a quick look at the logic inside:

c++
static struct binder_node *binder_new_node(struct binder_proc *proc,
                       struct flat_binder_object *fp)
{
    struct binder_node *node;
    struct binder_node *new_node = kzalloc(sizeof(*node), GFP_KERNEL);
    node = binder_init_node_ilocked(proc, new_node, fp);
    return node;
}

It allocates a chunk of memory and calls binder_init_node_ilocked:

c++
static struct binder_node *binder_init_node_ilocked(
                        struct binder_proc *proc,
                        struct binder_node *new_node,
                        struct flat_binder_object *fp)
{
    struct rb_node **p = &proc->nodes.rb_node;
    struct rb_node *parent = NULL;
    struct binder_node *node;
    binder_uintptr_t ptr = fp ? fp->binder : 0;
    binder_uintptr_t cookie = fp ? fp->cookie : 0;
    // Search the red-black tree for this node; return it directly if it exists
    while (*p) {
        parent = *p;
        node = rb_entry(parent, struct binder_node, rb_node);
        if (ptr < node->ptr)
            p = &(*p)->rb_left;
        else if (ptr > node->ptr)
            p = &(*p)->rb_right;
        else {
            binder_inc_node_tmpref_ilocked(node);
            return node;
        }
    }
    // No such node: use the new one and fill it in
    node = new_node;
    node->tmp_refs++;
    node->debug_id = atomic_inc_return(&binder_last_id);
    node->proc = proc;
    node->ptr = ptr;
    node->cookie = cookie;
    node->work.type = BINDER_WORK_NODE;
    //...
    return node;
}

To avoid creating duplicate nodes, binder_init_node_ilocked first searches the red-black tree. Only when nothing is found does it use the freshly allocated object, filling in ptr, cookie and the other fields for later use.

5. How a Binder Object Is Passed

5.1 Writing the Binder object on the client side

SurfaceFlinger为例,我们已经知道,SurfaceFlinger对象本身就是Service,是通过Parcel传递的,在Parcel中,将Service扁平化,主要还是在flattenBinder中:

c++
flat_binder_object obj;
// localBinder() returns the service itself
BBinder *local = binder->localBinder();
obj.binder = reinterpret_cast<uintptr_t>(local->getWeakRefs());
// Store the service pointer in cookie
obj.cookie = reinterpret_cast<uintptr_t>(local);

The Service's address is stored into the flat_binder_object, and the flat_binder_object participates directly in the kernel-layer logic.

In the kernel layer, as we just saw in binder_translate_binder, the cookie and binder fields are cleared and the type becomes BINDER_TYPE_HANDLE; in the flow described above, the data reaches the peer essentially in that shape. Next, the server-side read:

5.2 Reading the Binder object on the server side

For the read, look again at BnServiceManager's handling:

c++
case BnServiceManager::TRANSACTION_addService:
{
    ::std::string in_name;
    ::android::sp<::android::IBinder> in_service;
    _aidl_ret_status = _aidl_data.readStrongBinder(&in_service); // read the binder
    if (((_aidl_ret_status) != (::android::OK)))
    {
        break;
    }
}

The key call is readStrongBinder:

c++
status_t Parcel::readStrongBinder(sp<IBinder>* val) const
{
    status_t status = readNullableStrongBinder(val);
    if (status == OK && !val->get()) {
        status = UNEXPECTED_NULL;
    }
    return status;
}

readStrongBinder calls straight into readNullableStrongBinder:

c++
status_t Parcel::readNullableStrongBinder(sp<IBinder>* val) const
{
    return unflattenBinder(val);
}

Without further ado:

c++
status_t Parcel::unflattenBinder(sp<IBinder>* out) const
{
    if (isForRpc()) {
        //...
    }
    // Read the flat_binder_object
    const flat_binder_object* flat = readObject(false);
    if (flat) {
        switch (flat->hdr.type) {
            case BINDER_TYPE_BINDER: {
                sp<IBinder> binder =
                        sp<IBinder>::fromExisting(reinterpret_cast<IBinder*>(flat->cookie));
                return finishUnflattenBinder(binder, out);
            }
            case BINDER_TYPE_HANDLE: {
                sp<IBinder> binder =
                    ProcessState::self()->getStrongProxyForHandle(flat->handle);
                return finishUnflattenBinder(binder, out);
            }
        }
    }
    return BAD_TYPE;
}

From the analysis above we know the type is BINDER_TYPE_HANDLE, so the binder ultimately comes out of getStrongProxyForHandle:

c++
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    handle_entry* e = lookupHandleLocked(handle);
    if (e != nullptr) {
        IBinder* b = e->binder;
        if (b == nullptr || !e->refs->attemptIncWeak(this)) {
            // This time the handle is no longer 0
            if (handle == 0) {
                //...
            }
            // Create a BpBinder directly!
            sp<BpBinder> b = BpBinder::create(handle);
            e->binder = b.get();
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            //...
        }
    }
    return result;
}

Now it all clicks!

With the handle in hand, ServiceManager wraps it straight into a BpBinder, so what sits in ServiceManager's map is a BpBinder.

5.3 A summary of how the Binder object travels

In addService, the client wraps the Service into a flat_binder_object. In the kernel, the driver converts it into a node and rewrites the type to BINDER_TYPE_HANDLE: to every other process the Binder object is just a handle, converted back by the driver when it is actually needed. While handling this transaction, the server obtains the handle for the node and wraps it into a BpBinder, the remote proxy, which ServiceManager finally stores in its global map, completing the server-side read.
