1. Introduction
This article analyzes how Binder is used in Android, taking MediaPlayerService as the running example. Three parties are involved:
① ServiceManager
② MediaPlayerService
③ MediaPlayerClient
All code quoted below is from the Android 4.3 source tree.
2. The birth of MediaServer
MediaPlayerService runs inside the MediaServer process, so let's first look at how that process starts up:
/* frameworks/av/media/mediaserver/main_mediaserver.cpp */
using namespace android;

int main(int argc, char** argv)     // a process, so of course there is a main()
{
    ......
    if (...) {
        ......
    } else {
        // all other services
        if (doLog) {
            prctl(PR_SET_PDEATHSIG, SIGKILL);   // if parent media.log dies before me, kill me also
            setpgid(0, 0);                      // but if I die first, don't kill my parent
        }
        sp<ProcessState> proc(ProcessState::self());        // obtain the ProcessState instance
        sp<IServiceManager> sm = defaultServiceManager();   // obtain the ServiceManager proxy
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();            // instantiate AudioFlinger
        MediaPlayerService::instantiate();      // instantiate MediaPlayerService (the focus of this article)
        CameraService::instantiate();           // instantiate CameraService
        AudioPolicyService::instantiate();      // instantiate AudioPolicyService
        registerExtensions();                   // not clear what this registers; not relevant here
        ProcessState::self()->startThreadPool();    // start the thread pool; these threads talk directly to the binder driver (BD)
        IPCThreadState::self()->joinThreadPool();   // the main thread joins the pool and blocks here
    }
}
In Android, sp stands for strong pointer, a smart pointer that manages object allocation and release for you, much like ARC in iOS. When reading the code you can largely ignore it and treat it as a plain pointer; for example, sp&lt;IServiceManager&gt; is roughly equivalent to IServiceManager*.
So sp&lt;ProcessState&gt; proc(ProcessState::self()) is roughly equivalent to ProcessState* proc = ProcessState::self();
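To make the sp&lt;&gt; semantics concrete, here is a minimal sketch (the class Foo and its method are made up for illustration) of how an sp-managed object lives and dies:

#include <utils/RefBase.h>
#include <utils/StrongPointer.h>

using android::sp;
using android::RefBase;

// Anything managed by sp<> must derive from RefBase so it can be reference-counted.
class Foo : public RefBase {
public:
    void hello() { /* ... */ }
};

void example() {
    sp<Foo> p = new Foo();   // strong count becomes 1
    {
        sp<Foo> q = p;       // strong count becomes 2
        q->hello();          // used exactly like a raw pointer
    }                        // q leaves scope, strong count drops back to 1
}                            // p leaves scope, count hits 0, the Foo is deleted automatically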
2.1 ProcessState
Let's start with the first statement:
sp<ProcessState> proc(ProcessState::self());    // obtain the ProcessState instance
which is equivalent to: sp&lt;ProcessState&gt; proc = ProcessState::self();
This calls ProcessState::self() to obtain the ProcessState object and assigns it to the pointer proc. When we are done with it there is no need to release it manually; the sp takes care of that.
Let's walk through the internal calls step by step:
sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}
Nothing complicated here: a lazily created, process-wide singleton. Next, ProcessState's constructor:
static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);   // open the binder device
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        int vers;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = 15;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver())      // open_driver(), as the name says, opens /dev/binder
    , mVMStart(MAP_FAILED)          // start address of the mapped memory
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // XXX Ideally, there should be a specific define for whether we
        // have mmap (or whether we could possibly have the kernel module
        // available).
#if !defined(HAVE_WIN32_IPC)
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        // The memory backing /dev/binder is mapped into this process starting at mVMStart;
        // when another process maps the same physical memory, whatever one side writes is
        // visible to the other -- which is how the data gets across process boundaries.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
#else
        mDriverFD = -1;
#endif
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened. Terminating.");
}
When analyzing the Binder mechanism, keep one key point in mind: at its core, Binder is shared memory. If the binder-driver (BD) operations above raise questions, they are covered in a later article dedicated to the driver, after which things become much clearer.
To summarize, ProcessState::self() does two things:
① opens the /dev/binder device, which establishes the basic infrastructure for binder communication;
② maps a chunk of memory associated with /dev/binder into the current process, starting at mVMStart (a generic illustration of this idea follows below).
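Binder's own mapping is special (it is read-only for user space, and the driver copies the sender's data straight into the receiver's mapped buffer), but the underlying idea of two processes mapping the same physical pages can be illustrated with an ordinary file and MAP_SHARED. A rough sketch, with a made-up file path:

#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <cstring>

// Not Binder itself -- just the generic shared-mapping idea that Binder's mmap builds on.
int sharedMappingDemo()
{
    const char* kPath = "/data/local/tmp/shared_demo";   // hypothetical path
    const size_t kSize = 4096;

    int fd = open(kPath, O_CREAT | O_RDWR, 0600);
    if (fd < 0) return -1;
    if (ftruncate(fd, kSize) < 0) { close(fd); return -1; }

    // MAP_SHARED: writes through this mapping are visible to any other process
    // that maps the same file.
    char* addr = static_cast<char*>(
            mmap(NULL, kSize, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (addr == MAP_FAILED) { close(fd); return -1; }

    strcpy(addr, "hello across processes");   // another process mapping the file sees this

    munmap(addr, kSize);
    close(fd);
    return 0;
}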
2.2 defaultServiceManager
/* frameworks/native/libs/binder/IServiceManager.cpp */
sp<IServiceManager> defaultServiceManager()
{
    // singleton
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
            // ProcessState::self() is the same ProcessState instance as before;
            // next, look at getContextObject().
        }
    }

    return gDefaultServiceManager;
}
/* frameworks/native/libs/binder/ProcessState.cpp */
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    /*
     * Well-named function: get a proxy -- a proxy for what? For handle 0,
     * which is none other than ServiceManager.
     */
    return getStrongProxyForHandle(0);
}
Let's continue into getStrongProxyForHandle(0):
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    /*
     * ProcessState keeps a vector of handle entries, mHandleToObject. We first check
     * whether an IBinder already exists for this handle; if not, a new BpBinder is
     * created and inserted into mHandleToObject.
     */
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            // first time through, b is certainly NULL
            b = new BpBinder(handle);   // create a new BpBinder
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
Next, BpBinder's constructor:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);  // so BpBinder ultimately delegates to IPCThreadState
}
Continuing...
IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        /*
         * TLS stands for Thread Local Storage: each thread has its own copy of this
         * storage and threads never share it, so there is no synchronization to worry
         * about -- every thread only ever sees its own IPCThreadState.
         */
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);  // fetch the IPCThreadState saved in this thread's TLS
        if (st) return st;
        return new IPCThreadState;   // first call on this thread: create one (the constructor stores it in TLS)
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

/* ...... */

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    // mIn and mOut are Parcels: data read back from the binder driver lands in mIn,
    // and data waiting to be written to the driver is staged in mOut.
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}
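If the TLS pattern above looks unfamiliar, here is a minimal stand-alone sketch of the same idiom with pthreads (all names are made up); IPCThreadState::self() does essentially this, with an IPCThreadState instead of an int:

#include <pthread.h>

static pthread_key_t  gKey;
static pthread_once_t gKeyOnce = PTHREAD_ONCE_INIT;

static void destroyValue(void* value) {
    delete static_cast<int*>(value);        // runs when the owning thread exits
}

static void makeKey() {
    pthread_key_create(&gKey, destroyValue);
}

// Return this thread's private counter, creating it on first use.
// Other threads calling this get their own, independent counter.
int* threadLocalCounter() {
    pthread_once(&gKeyOnce, makeKey);
    int* value = static_cast<int*>(pthread_getspecific(gKey));
    if (value == NULL) {
        value = new int(0);
        pthread_setspecific(gKey, value);
    }
    return value;
}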
To summarize the call chain of defaultServiceManager (IServiceManager.cpp):
---> ProcessState::getContextObject ---> ProcessState::getStrongProxyForHandle(0), which returns an IBinder
---> which creates a BpBinder(0)
---> which in turn creates this thread's IPCThreadState
Back inside defaultServiceManager we had:
gDefaultServiceManager = interface_cast<IServiceManager>(ProcessState::self()->getContextObject(NULL));
Like many readers (the author of this article included), I used to assume interface_cast was just a forced type conversion. But ProcessState::self()->getContextObject(NULL) returns an IBinder*, so something more must be going on.
Here is what interface_cast actually is:
template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

// With the template substituted, the call above is equivalent to:
inline sp<IServiceManager> interface_cast(const sp<IBinder>& obj)
{
    return IServiceManager::asInterface(obj);
}

// So we need to dig into IServiceManager.
2.4 IServiceManager
/* frameworks/native/include/binder/IServiceManager.h */
class IServiceManager : public IInterface
{
public:
    DECLARE_META_INTERFACE(ServiceManager);
    // the asInterface() we just traced into is declared by this macro -- details below

    /**
     * Retrieve an existing service, blocking for a few seconds
     * if it doesn't yet exist.
     */
    virtual sp<IBinder>         getService( const String16& name) const = 0;

    /**
     * Retrieve an existing service, non-blocking.
     */
    virtual sp<IBinder>         checkService( const String16& name) const = 0;

    /**
     * Register a service.
     */
    virtual status_t            addService( const String16& name,
                                            const sp<IBinder>& service,
                                            bool allowIsolated = false) = 0;

    /**
     * Return list of all existing services.
     */
    virtual Vector<String16>    listServices() = 0;

    enum {
        GET_SERVICE_TRANSACTION = IBinder::FIRST_CALL_TRANSACTION,
        CHECK_SERVICE_TRANSACTION,
        ADD_SERVICE_TRANSACTION,
        LIST_SERVICES_TRANSACTION,
    };
};
Next, DECLARE_META_INTERFACE:
#define DECLARE_META_INTERFACE(INTERFACE)                               \
    static const android::String16 descriptor;                         \
    static android::sp<I##INTERFACE> asInterface(                      \
            const android::sp<android::IBinder>& obj);                 \
    virtual const android::String16& getInterfaceDescriptor() const;   \
    I##INTERFACE();                                                    \
    virtual ~I##INTERFACE();                                           \
Substituting ServiceManager for the macro parameter gives:
#define DECLARE_META_INTERFACE(ServiceManager)                          \
    static const android::String16 descriptor;                         \
    static android::sp<IServiceManager> asInterface(                   \
            const android::sp<android::IBinder>& obj);                 \
    virtual const android::String16& getInterfaceDescriptor() const;   \
    IServiceManager();                                                  \
    virtual ~IServiceManager();                                         \
Analysis: DECLARE_META_INTERFACE simply uses the preprocessor to add to IServiceManager a member variable (descriptor), the declarations of asInterface() and getInterfaceDescriptor(), and the constructor/destructor declarations. If a macro is used to declare these members, is there a companion macro that implements them? There is: IMPLEMENT_META_INTERFACE.
IMPLEMENT_META_INTERFACE is easy to find in frameworks/native/libs/binder/IServiceManager.cpp:
/* frameworks/native/libs/binder/IServiceManager.cpp */
IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager");

/* frameworks/native/include/binder/IInterface.h */
#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const android::String16 I##INTERFACE::descriptor(NAME);             \
    const android::String16&                                            \
            I##INTERFACE::getInterfaceDescriptor() const {              \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    android::sp<I##INTERFACE> I##INTERFACE::asInterface(                \
            const android::sp<android::IBinder>& obj)                   \
    {                                                                   \
        android::sp<I##INTERFACE> intr;                                 \
        if (obj != NULL) {                                              \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }                                                                   \
    I##INTERFACE::I##INTERFACE() { }                                    \
    I##INTERFACE::~I##INTERFACE() { }                                   \
Substituting the actual arguments makes it much clearer:
#define IMPLEMENT_META_INTERFACE(ServiceManager, "android.os.IServiceManager")             \
    const android::String16 IServiceManager::descriptor("android.os.IServiceManager");     \
    const android::String16&                                                                \
            IServiceManager::getInterfaceDescriptor() const {                               \
        return IServiceManager::descriptor;                                                 \
    }                                                                                       \
    android::sp<IServiceManager> IServiceManager::asInterface(                              \
            const android::sp<android::IBinder>& obj)                                       \
    {                                                                                       \
        android::sp<IServiceManager> intr;                                                  \
        if (obj != NULL) {                                                                  \
            intr = static_cast<IServiceManager*>(                                           \
                obj->queryLocalInterface(                                                   \
                        IServiceManager::descriptor).get());                                \
            if (intr == NULL) {                                                             \
                intr = new BpServiceManager(obj);                                           \
            }                                                                               \
        }                                                                                   \
        return intr;                                                                        \
    }                                                                                       \
    IServiceManager::IServiceManager() { }                                                  \
    IServiceManager::~IServiceManager() { }                                                 \
Now recall how it was called:
gDefaultServiceManager = interface_cast&lt;IServiceManager&gt;(ProcessState::self()->getContextObject(NULL));
---> interface_cast&lt;IServiceManager&gt;(new BpBinder(0));
---> IServiceManager::asInterface(new BpBinder(0));
So, at a glance:
gDefaultServiceManager = new BpServiceManager(new BpBinder(0));
Why do it this way? Because the end result is that BpServiceManager is wrapped behind IServiceManager, and IServiceManager is what the upper layers actually see.
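Putting the two macros together: if you were defining your own Binder interface, it would look roughly like the hypothetical IMyService below (all names are made up; the IMPLEMENT macro additionally requires a BpMyService proxy class, which is sketched further below in the Bn/Bp discussion):

// IMyService.h -- hypothetical interface, for illustration only
class IMyService : public IInterface {
public:
    DECLARE_META_INTERFACE(MyService);      // declares descriptor, asInterface(), getInterfaceDescriptor(), ctor/dtor

    virtual status_t doSomething(int32_t value) = 0;

    enum {
        DO_SOMETHING = IBinder::FIRST_CALL_TRANSACTION,
    };
};

// IMyService.cpp
IMPLEMENT_META_INTERFACE(MyService, "com.example.IMyService");
// After expansion this provides IMyService::asInterface(): it returns the local
// object if queryLocalInterface() finds one in this process, otherwise it wraps
// the IBinder in a BpMyService proxy -- exactly the path interface_cast<> took above.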
2.5 BpServiceManager
The p stands for proxy, so Bp means Binder Proxy, and BpServiceManager is ServiceManager's proxy. Being a proxy, it should be usable by clients -- and who is its user-facing type? IServiceManager: it is BpServiceManager's base class, so what callers actually hold is an IServiceManager, with BpServiceManager working behind it.
/* frameworks/native/libs/binder/IServiceManager.cpp */
class BpServiceManager : public BpInterface<IServiceManager>
    // via BpInterface<IServiceManager>, this inherits both IServiceManager and BpRefBase
{
public:
    BpServiceManager(const sp<IBinder>& impl)   // impl is the BpBinder(0) we passed in
        : BpInterface<IServiceManager>(impl)
    {
    }

    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            ALOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }

    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }

    virtual status_t addService(const String16& name, const sp<IBinder>& service,
            bool allowIsolated)
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }

    virtual Vector<String16> listServices()
    {
        Vector<String16> res;
        int n = 0;
        for (;;) {
            Parcel data, reply;
            data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
            data.writeInt32(n++);
            status_t err = remote()->transact(LIST_SERVICES_TRANSACTION, data, &reply);
            if (err != NO_ERROR) break;
            res.add(reply.readString16());
        }
        return res;
    }
};
In the constructor above, BpInterface&lt;IServiceManager&gt;(impl) invokes BpInterface's constructor, with impl being our BpBinder(0). Let's look at that constructor:
template<typename INTERFACE>
inline BpInterface<INTERFACE>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}

// Substituting the call above:
inline BpInterface<IServiceManager>::BpInterface(const sp<IBinder>& remote)
    : BpRefBase(remote)
{
}
Here remote is the BpBinder(0); it is exactly what the remote() accessor returns later on.
Next, BpRefBase:
/* frameworks/native/include/binder/Binder.h */
class BpRefBase : public virtual RefBase { protected: BpRefBase(const sp<IBinder>& o); virtual ~BpRefBase(); virtual void onFirstRef(); virtual void onLastStrongRef(const void* id); virtual bool onIncStrongAttempted(uint32_t flags, const void* id); inline IBinder* remote() { return mRemote; } inline IBinder* remote() const { return mRemote; } private: BpRefBase(const BpRefBase& o); BpRefBase& operator=(const BpRefBase& o); IBinder* const mRemote; RefBase::weakref_type* mRefs; volatile int32_t mState; };
/* frameworks/native/libs/binder/Binder.cpp */
BpRefBase::BpRefBase(const sp<IBinder>& o)
    // o is the argument we passed when constructing BpServiceManager: the BpBinder(0)
    : mRemote(o.get()), mRefs(NULL), mState(0)
    // o.get() is provided by the strong pointer (see frameworks/native/include/utils/StrongPointer.h):
    // it returns the raw pointer wrapped by o, i.e. the BpBinder(0), which is stored in mRemote.
{
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);

    if (mRemote) {
        mRemote->incStrong(this);           // Removed on first IncStrong().
        mRefs = mRemote->createWeak(this);  // Held for our entire lifetime.
    }
}

BpRefBase::~BpRefBase()
{
    if (mRemote) {
        if (!(mState&kRemoteAcquired)) {
            mRemote->decStrong(this);
        }
        mRefs->decWeak(this);
    }
}

void BpRefBase::onFirstRef()
{
    android_atomic_or(kRemoteAcquired, &mState);
}

void BpRefBase::onLastStrongRef(const void* id)
{
    if (mRemote) {
        mRemote->decStrong(this);
    }
}

bool BpRefBase::onIncStrongAttempted(uint32_t flags, const void* id)
{
    return mRemote ? mRefs->attemptIncStrong(this) : false;
}
So the remote() we saw inside BpServiceManager returns exactly that BpBinder(0).
Yes, it really is that roundabout.
OK. At this point we know that sp&lt;IServiceManager&gt; sm = defaultServiceManager(); actually returns a BpServiceManager, and that its remote object is a BpBinder whose handle is 0.
Now back to the MediaServer startup code:
int main(int argc, char** argv)
{
    .....
    if (doLog && (childPid = fork()) != 0) {
        ......
    } else {
        // all other services
        ......
        sp<ProcessState> proc(ProcessState::self());
        sp<IServiceManager> sm = defaultServiceManager();
        // So far all we have is BpServiceManager, the proxy for ServiceManager.
        ALOGI("ServiceManager: %p", sm.get());
        AudioFlinger::instantiate();            // skip AudioFlinger for now and look at MediaPlayerService
        MediaPlayerService::instantiate();
        CameraService::instantiate();
        AudioPolicyService::instantiate();
        registerExtensions();
        ProcessState::self()->startThreadPool();
        IPCThreadState::self()->joinThreadPool();
    }
}
2.6 MediaPlayerService
/* frameworks/av/media/libmediaplayerservice/MediaPlayerService.cpp */
void MediaPlayerService::instantiate() {
    defaultServiceManager()->addService(
            // defaultServiceManager() returns the BpServiceManager we just analyzed
            String16("media.player"), new MediaPlayerService());
}
So MediaPlayerService is handed over to ServiceManager via addService() to be managed there.
MediaPlayerService derives from BnMediaPlayerService:
class MediaPlayerService : public BnMediaPlayerService
A lot of BnXXX and BpXXX classes show up, but the logic is straightforward: Bn is short for Binder Native, Bp for Binder Proxy, and they always come in pairs -- behind every Bp there stands a Bn.
BpServiceManager's counterpart is BnServiceManager: BpServiceManager is what clients use, with BnServiceManager doing the work behind the scenes.
BnMediaPlayerService's counterpart is BpMediaPlayerService: we have just created a BnMediaPlayerService (the MediaPlayerService object) and handed it to ServiceManager. How does a client then reach MediaPlayerService? Exactly -- through BpMediaPlayerService. A hypothetical sketch of such a Bn/Bp pair follows right below.
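To make the Bn/Bp pairing concrete, here is a rough, hypothetical sketch of one such pair (IMyService, BpMyService and BnMyService are made-up names, continuing the earlier macro example); the real BpServiceManager and BnMediaPlayerService pairs follow exactly this shape:

// Proxy side, lives in the client process: pack the arguments into a Parcel
// and push them through the remote BpBinder.
class BpMyService : public BpInterface<IMyService> {
public:
    BpMyService(const sp<IBinder>& impl) : BpInterface<IMyService>(impl) {}

    virtual status_t doSomething(int32_t value) {
        Parcel data, reply;
        data.writeInterfaceToken(IMyService::getInterfaceDescriptor());
        data.writeInt32(value);
        return remote()->transact(DO_SOMETHING, data, &reply);
    }
};

// Native side, lives in the server process: unpack the Parcel and call the real
// implementation, which is provided by a subclass of BnMyService.
class BnMyService : public BnInterface<IMyService> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0) {
        switch (code) {
            case DO_SOMETHING: {
                CHECK_INTERFACE(IMyService, data, reply);
                return doSomething(data.readInt32());
            }
            default:
                return BBinder::onTransact(code, data, reply, flags);
        }
    }
};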
As for why ServiceManager exists at all: the previous article already explained its role by analogy with the TCP/IP model, so we won't repeat that here.
With that in mind, let's go back to addService():
/* frameworks/native/libs/binder/IServiceManager.cpp */
class BpServiceManager : public BpInterface<IServiceManager>
{
......
    virtual status_t addService(const String16& name, const sp<IBinder>& service,
                                bool allowIsolated)
    {
        // data is the command packet that will be sent to the ServiceManager side
        Parcel data, reply;
        // first write the interface name: "android.os.IServiceManager"
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        // then the service name: "media.player"
        data.writeString16(name);
        // then the new service itself: the MediaPlayerService object
        data.writeStrongBinder(service);
        data.writeInt32(allowIsolated ? 1 : 0);
        // finally, hand the packet to BpBinder::transact() to send it to the ServiceManager side
        status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
        return err == NO_ERROR ? reply.readExceptionCode() : err;
    }
}
Clearly this transact() call is where the real work happens.
/* frameworks/native/libs/binder/BpBinder.cpp */
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
/* frameworks/native/libs/binder/IPCThreadState.cpp */ status_t IPCThreadState::transact(int32_t handle, uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) { status_t err = data.errorCheck(); flags |= TF_ACCEPT_FDS; IF_LOG_TRANSACTIONS() { TextOutput::Bundle _b(alog); alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand " << handle << " / code " << TypeCode(code) << ": " << indent << data << dedent << endl; } if (err == NO_ERROR) { LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(), (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY"); err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL); // 通过该方法发送数据 } if (err != NO_ERROR) { if (reply) reply->setError(err); return (mLastError = err); } if ((flags & TF_ONE_WAY) == 0) { #if 0 if (code == 4) { // relayout ALOGI(">>>>>> CALLING transaction 4"); } else { ALOGI(">>>>>> CALLING transaction %d", code); } #endif if (reply) { err = waitForResponse(reply); } else { Parcel fakeReply; err = waitForResponse(&fakeReply); } #if 0 if (code == 4) { // relayout ALOGI("<<<<<< RETURNING transaction 4"); } else { ALOGI("<<<<<< RETURNING transaction %d", code); } #endif IF_LOG_TRANSACTIONS() { TextOutput::Bundle _b(alog); alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand " << handle << ": "; if (reply) alog << indent << *reply << dedent << endl; else alog << "(none requested)" << endl; } } else { err = waitForResponse(NULL, NULL); } return err; }
/* frameworks/native/libs/binder/IPCThreadState.cpp */ status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags, int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer) { binder_transaction_data tr; tr.target.handle = handle; tr.code = code; tr.flags = binderFlags; tr.cookie = 0; tr.sender_pid = 0; tr.sender_euid = 0; const status_t err = data.errorCheck(); if (err == NO_ERROR) { tr.data_size = data.ipcDataSize(); tr.data.ptr.buffer = data.ipcData(); tr.offsets_size = data.ipcObjectsCount()*sizeof(size_t); tr.data.ptr.offsets = data.ipcObjects(); } else if (statusBuffer) { tr.flags |= TF_STATUS_CODE; *statusBuffer = err; tr.data_size = sizeof(status_t); tr.data.ptr.buffer = statusBuffer; tr.offsets_size = 0; tr.data.ptr.offsets = NULL; } else { return (mLastError = err); }
// The code above does one thing: it wraps the data we passed in into a binder_transaction_data.
mOut.writeInt32(cmd); mOut.write(&tr, sizeof(tr));
// The binder_transaction_data is then appended to mOut -- note that nothing has been written to /dev/binder yet. Keep reading.
// Recall the statement from the previous article; half of it should now make sense:
/*
 * IPCThreadState has two Parcel members, mIn and mOut. A pool thread keeps polling the binder driver (BD):
 * whatever is available to read is read into mIn, and whatever is pending in mOut is written out to the driver.
 * In short: data read from the driver lands in mIn; data waiting to be written to the driver is staged in mOut.
 */
return NO_ERROR; }
/* frameworks/native/libs/binder/IPCThreadState.cpp */
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;   // talkWithDriver() is the actual exchange with the driver
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
// After talkWithDriver() returns we read from mIn: talkWithDriver() wrote the contents of mOut to the driver and filled mIn with whatever the driver sent back.
cmd = mIn.readInt32(); IF_LOG_COMMANDS() { alog << "Processing waitForResponse Command: " << getReturnString(cmd) << endl; } switch (cmd) { case BR_TRANSACTION_COMPLETE: if (!reply && !acquireResult) goto finish; break; case BR_DEAD_REPLY: err = DEAD_OBJECT; goto finish; case BR_FAILED_REPLY: err = FAILED_TRANSACTION; goto finish; case BR_ACQUIRE_RESULT: { ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT"); const int32_t result = mIn.readInt32(); if (!acquireResult) continue; *acquireResult = result ? NO_ERROR : INVALID_OPERATION; } goto finish; case BR_REPLY: { binder_transaction_data tr; err = mIn.read(&tr, sizeof(tr)); ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY"); if (err != NO_ERROR) goto finish; if (reply) { if ((tr.flags & TF_STATUS_CODE) == 0) { reply->ipcSetDataReference( reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(size_t), freeBuffer, this); } else { err = *static_cast<const status_t*>(tr.data.ptr.buffer); freeBuffer(NULL, reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(size_t), this); } } else { freeBuffer(NULL, reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(size_t), this); continue; } } goto finish; default: err = executeCommand(cmd); if (err != NO_ERROR) goto finish; break; } } finish: if (err != NO_ERROR) { if (acquireResult) *acquireResult = err; if (reply) reply->setError(err); mLastError = err; } return err; }
/* frameworks/native/libs/binder/IPCThreadState.cpp */ status_t IPCThreadState::talkWithDriver(bool doReceive) { if (mProcess->mDriverFD <= 0) { return -EBADF; } binder_write_read bwr; // Is the read buffer empty? const bool needRead = mIn.dataPosition() >= mIn.dataSize(); // We don't want to write anything if we are still reading // from data left in the input buffer and the caller // has requested to read the next data. const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0; bwr.write_size = outAvail; bwr.write_buffer = (long unsigned int)mOut.data(); // This is what we'll read. if (doReceive && needRead) { bwr.read_size = mIn.dataCapacity(); bwr.read_buffer = (long unsigned int)mIn.data(); } else { bwr.read_size = 0; bwr.read_buffer = 0; } IF_LOG_COMMANDS() { TextOutput::Bundle _b(alog); if (outAvail != 0) { alog << "Sending commands to driver: " << indent; const void* cmds = (const void*)bwr.write_buffer; const void* end = ((const uint8_t*)cmds)+bwr.write_size; alog << HexDump(cmds, bwr.write_size) << endl; while (cmds < end) cmds = printCommand(alog, cmds); alog << dedent; } alog << "Size of receive buffer: " << bwr.read_size << ", needRead: " << needRead << ", doReceive: " << doReceive << endl; } // Return immediately if there is nothing to do. if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR; bwr.write_consumed = 0; bwr.read_consumed = 0; status_t err; do { IF_LOG_COMMANDS() { alog << "About to read/write, write size = " << mOut.dataSize() << endl; } #if defined(HAVE_ANDROID_OS)
// 把数据封装到bwr中,然后调用ioctrl发送给Driver if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) err = NO_ERROR; else err = -errno; #else err = INVALID_OPERATION; #endif if (mProcess->mDriverFD <= 0) { err = -EBADF; } IF_LOG_COMMANDS() { alog << "Finished read/write, write size = " << mOut.dataSize() << endl; } } while (err == -EINTR); IF_LOG_COMMANDS() { alog << "Our err: " << (void*)err << ", write consumed: " << bwr.write_consumed << " (of " << mOut.dataSize() << "), read consumed: " << bwr.read_consumed << endl; } if (err >= NO_ERROR) {
// 回复数据就在bwr中,bwr中回复数据的buffer就是mIn提供的 if (bwr.write_consumed > 0) { if (bwr.write_consumed < (ssize_t)mOut.dataSize()) mOut.remove(0, bwr.write_consumed); else mOut.setDataSize(0); } if (bwr.read_consumed > 0) { mIn.setDataSize(bwr.read_consumed); mIn.setDataPosition(0); } IF_LOG_COMMANDS() { TextOutput::Bundle _b(alog); alog << "Remaining data size: " << mOut.dataSize() << endl; alog << "Received commands from driver: " << indent; const void* cmds = mIn.data(); const void* end = mIn.data() + mIn.dataSize(); alog << HexDump(cmds, mIn.dataSize()) << endl; while (cmds < end) cmds = printReturnCommand(alog, cmds); alog << dedent; } return NO_ERROR; } return err; }
At this point the addService() flow is complete.
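Stripped of logging and error handling, the userspace side of that round trip -- writeTransactionData() staging the command in mOut, talkWithDriver() exchanging it with the driver, waitForResponse() parsing the reply from mIn -- boils down to roughly the condensed sketch below. It is a paraphrase, not the literal source:

binder_write_read bwr;
bwr.write_buffer   = (uintptr_t)mOut.data();    // BC_TRANSACTION + binder_transaction_data
bwr.write_size     = mOut.dataSize();
bwr.write_consumed = 0;
bwr.read_buffer    = (uintptr_t)mIn.data();     // the driver writes BR_* commands back here
bwr.read_size      = mIn.dataCapacity();
bwr.read_consumed  = 0;

if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0) {
    mOut.setDataSize(0);                        // everything staged in mOut was consumed
    mIn.setDataSize(bwr.read_consumed);         // the reply now sits in mIn
    mIn.setDataPosition(0);
    // waitForResponse() then reads BR_TRANSACTION_COMPLETE / BR_REPLY out of mIn.
}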
One point that is easy to get confused about: MediaPlayerService is a BnMediaPlayerService, so shouldn't it be waiting for a BpMediaPlayerService to talk to it? Yet nowhere in MediaPlayerService do we see /dev/binder being touched. Keep that question in mind.
Before getting to BnMediaPlayerService, a quick word about BnServiceManager.
2.8 BnServiceManager
About BnServiceManager I still have a lingering doubt: some write-ups claim BnServiceManager does not exist, yet in the Android 4.3 source IServiceManager.cpp clearly defines one -- I simply could not find any place that uses it. What actually does BnServiceManager's job is the native servicemanager daemon:
/* frameworks/native/cmds/servicemanager/service_manager.c */ int do_add_service(struct binder_state *bs, uint16_t *s, unsigned len, void *ptr, unsigned uid, int allow_isolated) { struct svcinfo *si; //ALOGI("add_service('%s',%p,%s) uid=%d\n", str8(s), ptr, // allow_isolated ? "allow_isolated" : "!allow_isolated", uid); if (!ptr || (len == 0) || (len > 127)) return -1; if (!svc_can_register(uid, s)) { ALOGE("add_service('%s',%p) uid=%d - PERMISSION DENIED\n", str8(s), ptr, uid); return -1; } si = find_svc(s, len); if (si) { if (si->ptr) { ALOGE("add_service('%s',%p) uid=%d - ALREADY REGISTERED, OVERRIDE\n", str8(s), ptr, uid); svcinfo_death(bs, si); } si->ptr = ptr; } else { si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t)); if (!si) { ALOGE("add_service('%s',%p) uid=%d - OUT OF MEMORY\n", str8(s), ptr, uid); return -1; } si->ptr = ptr; si->len = len; memcpy(si->name, s, (len + 1) * sizeof(uint16_t)); si->name[len] = '\0'; si->death.func = svcinfo_death; si->death.ptr = si; si->allow_isolated = allow_isolated; si->next = svclist; svclist = si; } binder_acquire(bs, ptr); binder_link_to_death(bs, ptr, &si->death); return 0; } int svcmgr_handler(struct binder_state *bs, struct binder_txn *txn, struct binder_io *msg, struct binder_io *reply) { struct svcinfo *si; uint16_t *s; unsigned len; void *ptr; uint32_t strict_policy; int allow_isolated; // ALOGI("target=%p code=%d pid=%d uid=%d\n", // txn->target, txn->code, txn->sender_pid, txn->sender_euid); if (txn->target != svcmgr_handle) return -1; // Equivalent to Parcel::enforceInterface(), reading the RPC // header with the strict mode policy mask and the interface name. // Note that we ignore the strict_policy and don't propagate it // further (since we do no outbound RPCs anyway). strict_policy = bio_get_uint32(msg); s = bio_get_string16(msg, &len); if ((len != (sizeof(svcmgr_id) / 2)) || memcmp(svcmgr_id, s, sizeof(svcmgr_id))) { fprintf(stderr,"invalid id %s\n", str8(s)); return -1; } switch(txn->code) { case SVC_MGR_GET_SERVICE: case SVC_MGR_CHECK_SERVICE: s = bio_get_string16(msg, &len); ptr = do_find_service(bs, s, len, txn->sender_euid); if (!ptr) break; bio_put_ref(reply, ptr); return 0; case SVC_MGR_ADD_SERVICE: s = bio_get_string16(msg, &len); ptr = bio_get_ref(msg); allow_isolated = bio_get_uint32(msg) ? 1 : 0; if (do_add_service(bs, s, len, ptr, txn->sender_euid, allow_isolated)) return -1; break; case SVC_MGR_LIST_SERVICES: { unsigned n = bio_get_uint32(msg); si = svclist; while ((n-- > 0) && si) si = si->next; if (si) { bio_put_string16(reply, si->name); return 0; } return -1; } default: ALOGE("unknown code %d\n", txn->code); return -1; } bio_put_uint32(reply, 0); return 0; } int main(int argc, char **argv) { struct binder_state *bs; void *svcmgr = BINDER_SERVICE_MANAGER; bs = binder_open(128*1024); if (binder_become_context_manager(bs)) { ALOGE("cannot become context manager (%s)\n", strerror(errno)); return -1; } svcmgr_handle = svcmgr; binder_loop(bs, svcmgr_handler); return 0; }
In short: it receives commands from the driver, parses them, and executes them (add, find, and list services).
2.9 Why ServiceManager exists
In Android, every service must first be added to ServiceManager, which manages them centrally; that way the system can always tell which services currently exist. And when a client of some service -- say MediaPlayerService -- wants to talk to it, the client must first query ServiceManager for that service's information and then use what ServiceManager returns to communicate with it:
1. MediaPlayerService registers itself with ServiceManager.
2. MediaPlayerClient queries ServiceManager for MediaPlayerService's information.
3. MediaPlayerClient uses the returned information to communicate with MediaPlayerService.
ServiceManager's handle is 0, so any request sent to handle 0 ultimately arrives at ServiceManager. A sketch of the client side (steps 2 and 3) follows below.
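The client side uses the same machinery in reverse. A rough sketch (error handling and the death-notification bookkeeping omitted) of how a MediaPlayer client would obtain the service registered under "media.player":

sp<IMediaPlayerService> getMediaPlayerService()
{
    sp<IServiceManager> sm = defaultServiceManager();               // the BpServiceManager(BpBinder(0)) from before
    sp<IBinder> binder = sm->getService(String16("media.player"));  // retries for a few seconds if not yet registered
    // interface_cast wraps the binder in a BpMediaPlayerService proxy; every call made
    // through that proxy is packed into a Parcel and sent via BpBinder::transact(),
    // exactly like addService() above.
    return interface_cast<IMediaPlayerService>(binder);
}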
To summarize how MediaPlayerService comes to life:
1. defaultServiceManager() yields a BpServiceManager.
2. MediaPlayerService is instantiated and handed to ServiceManager via BpServiceManager::addService().
3. Inside servicemanager, binder_loop() keeps receiving requests from the binder driver and handling them.
MediaPlayerService itself never opens the binder device, so what do its base classes -- the BnXXX side -- actually do?
3.1 How MediaPlayerService gets its binder access
// MediaPlayerService derives from BnMediaPlayerService
class MediaPlayerService : public BnMediaPlayerService
{
    .......
};

// BnMediaPlayerService derives from BnInterface<IMediaPlayerService>,
// which brings in both IMediaPlayerService and BBinder
class BnMediaPlayerService: public BnInterface<IMediaPlayerService>
{
public:
    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);
};
As we saw in main_mediaserver.cpp, the binder device was already opened in ProcessState's constructor.
3.2 Looper
So opening the binder device is a per-process affair: one open() of /dev/binder per process is enough.
Then where does the message loop run?
ProcessState::self()->startThreadPool();
IPCThreadState::self()->joinThreadPool();
/* frameworks/native/libs/binder/ProcessState.cpp */
void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

// spawnPooledThread() creates a PoolThread, so let's look at that:
class PoolThread : public Thread
{
public:
    PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        // mIsMain is true here.
        // Note: threadLoop() runs on the newly spawned thread, so this call to
        // IPCThreadState::self() creates a brand-new IPCThreadState for that
        // thread (thanks to the thread-local storage we saw earlier).
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};
/* frameworks/native/libs/binder/IPCThreadState.cpp */
// Both the main thread and the pooled worker thread end up calling joinThreadPool:
void IPCThreadState::joinThreadPool(bool isMain) { LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n", (void*)pthread_self(), getpid()); mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER); // This thread may have been spawned by a thread that was in the background // scheduling group, so first we will make sure it is in the foreground // one to avoid performing an initial transaction in the background. set_sched_policy(mMyThreadId, SP_FOREGROUND); status_t result; do { int32_t cmd; // When we've cleared the incoming command queue, process any pending derefs if (mIn.dataPosition() >= mIn.dataSize()) { size_t numPending = mPendingWeakDerefs.size(); if (numPending > 0) { for (size_t i = 0; i < numPending; i++) { RefBase::weakref_type* refs = mPendingWeakDerefs[i]; refs->decWeak(mProcess.get()); } mPendingWeakDerefs.clear(); } numPending = mPendingStrongDerefs.size(); if (numPending > 0) { for (size_t i = 0; i < numPending; i++) { BBinder* obj = mPendingStrongDerefs[i]; obj->decStrong(mProcess.get()); } mPendingStrongDerefs.clear(); } } // now get the next command to be processed, waiting if necessary result = talkWithDriver(); if (result >= NO_ERROR) { size_t IN = mIn.dataAvail(); if (IN < sizeof(int32_t)) continue; cmd = mIn.readInt32(); IF_LOG_COMMANDS() { alog << "Processing top-level Command: " << getReturnString(cmd) << endl; } result = executeCommand(cmd); } else if (result != TIMED_OUT && result != -ECONNREFUSED && result != -EBADF) { ALOGE("talkWithDriver(fd=%d) returned unexpected error %d, aborting", mProcess->mDriverFD, result); abort(); } // After executing the command, ensure that the thread is returned to the // foreground cgroup before rejoining the pool. The driver takes care of // restoring the priority, but doesn't do anything with cgroups so we // need to take care of that here in userspace. Note that we do make // sure to go in the foreground after executing a transaction, but // there are other callbacks into user code that could have changed // our group so we want to make absolutely sure it is put back. set_sched_policy(mMyThreadId, SP_FOREGROUND); // Let this thread exit the thread pool if it is no longer // needed and it is not the main process thread. if(result == TIMED_OUT && !isMain) { break; } } while (result != -ECONNREFUSED && result != -EBADF); LOG_THREADPOOL("**** THREAD %p (PID %d) IS LEAVING THE THREAD POOL err=%p\n", (void*)pthread_self(), getpid(), (void*)result); mOut.writeInt32(BC_EXIT_LOOPER); talkWithDriver(false); }
So this is the loop -- but both threads appear to run it. Does that mean there are two message loops? (See the condensed sketch below and the note at the end of this section.)
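Condensed to its essence (again a paraphrase, not the literal source), the loop each of these binder threads runs looks like this:

status_t result;
do {
    result = talkWithDriver();                  // block in the driver until a command arrives
    if (result >= NO_ERROR && mIn.dataAvail() >= sizeof(int32_t)) {
        int32_t cmd = mIn.readInt32();          // e.g. BR_TRANSACTION, BR_SPAWN_LOOPER, ...
        result = executeCommand(cmd);           // BR_TRANSACTION ends up in BBinder::transact(), see below
    }
} while (result != -ECONNREFUSED && result != -EBADF);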
status_t IPCThreadState::executeCommand(int32_t cmd) { BBinder* obj; RefBase::weakref_type* refs; status_t result = NO_ERROR; switch (cmd) { case BR_ERROR: result = mIn.readInt32(); break; case BR_OK: break; case BR_ACQUIRE: refs = (RefBase::weakref_type*)mIn.readInt32(); obj = (BBinder*)mIn.readInt32(); ALOG_ASSERT(refs->refBase() == obj, "BR_ACQUIRE: object %p does not match cookie %p (expected %p)", refs, obj, refs->refBase()); obj->incStrong(mProcess.get()); IF_LOG_REMOTEREFS() { LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj); obj->printRefs(); } mOut.writeInt32(BC_ACQUIRE_DONE); mOut.writeInt32((int32_t)refs); mOut.writeInt32((int32_t)obj); break; case BR_RELEASE: refs = (RefBase::weakref_type*)mIn.readInt32(); obj = (BBinder*)mIn.readInt32(); ALOG_ASSERT(refs->refBase() == obj, "BR_RELEASE: object %p does not match cookie %p (expected %p)", refs, obj, refs->refBase()); IF_LOG_REMOTEREFS() { LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj); obj->printRefs(); } mPendingStrongDerefs.push(obj); break; case BR_INCREFS: refs = (RefBase::weakref_type*)mIn.readInt32(); obj = (BBinder*)mIn.readInt32(); refs->incWeak(mProcess.get()); mOut.writeInt32(BC_INCREFS_DONE); mOut.writeInt32((int32_t)refs); mOut.writeInt32((int32_t)obj); break; case BR_DECREFS: refs = (RefBase::weakref_type*)mIn.readInt32(); obj = (BBinder*)mIn.readInt32(); // NOTE: This assertion is not valid, because the object may no // longer exist (thus the (BBinder*)cast above resulting in a different // memory address). //ALOG_ASSERT(refs->refBase() == obj, // "BR_DECREFS: object %p does not match cookie %p (expected %p)", // refs, obj, refs->refBase()); mPendingWeakDerefs.push(refs); break; case BR_ATTEMPT_ACQUIRE: refs = (RefBase::weakref_type*)mIn.readInt32(); obj = (BBinder*)mIn.readInt32(); { const bool success = refs->attemptIncStrong(mProcess.get()); ALOG_ASSERT(success && refs->refBase() == obj, "BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)", refs, obj, refs->refBase()); mOut.writeInt32(BC_ACQUIRE_RESULT); mOut.writeInt32((int32_t)success); } break; case BR_TRANSACTION: { binder_transaction_data tr; result = mIn.read(&tr, sizeof(tr));
//来了一个命令,解析成BR_TRANSACTION,然后读取后续的信息 ALOG_ASSERT(result == NO_ERROR, "Not enough command data for brTRANSACTION"); if (result != NO_ERROR) break; Parcel buffer; buffer.ipcSetDataReference( reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(size_t), freeBuffer, this); const pid_t origPid = mCallingPid; const uid_t origUid = mCallingUid; mCallingPid = tr.sender_pid; mCallingUid = tr.sender_euid; int curPrio = getpriority(PRIO_PROCESS, mMyThreadId); if (gDisableBackgroundScheduling) { if (curPrio > ANDROID_PRIORITY_NORMAL) { // We have inherited a reduced priority from the caller, but do not // want to run in that state in this process. The driver set our // priority already (though not our scheduling class), so bounce // it back to the default before invoking the transaction. setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL); } } else { if (curPrio >= ANDROID_PRIORITY_BACKGROUND) { // We want to use the inherited priority from the caller. // Ensure this thread is in the background scheduling class, // since the driver won't modify scheduling classes for us. // The scheduling group is reset to default by the caller // once this method returns after the transaction is complete. set_sched_policy(mMyThreadId, SP_BACKGROUND); } } //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid); Parcel reply; IF_LOG_TRANSACTIONS() { TextOutput::Bundle _b(alog); alog << "BR_TRANSACTION thr " << (void*)pthread_self() << " / obj " << tr.target.ptr << " / code " << TypeCode(tr.code) << ": " << indent << buffer << dedent << endl << "Data addr = " << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer) << ", offsets addr=" << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl; } if (tr.target.ptr) {
//这里用的是BBinder。 sp<BBinder> b((BBinder*)tr.cookie); const status_t error = b->transact(tr.code, buffer, &reply, tr.flags); if (error < NO_ERROR) reply.setError(error); } else { const status_t error = the_context_object->transact(tr.code, buffer, &reply, tr.flags); if (error < NO_ERROR) reply.setError(error); } //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n", // mCallingPid, origPid, origUid); if ((tr.flags & TF_ONE_WAY) == 0) { LOG_ONEWAY("Sending reply to %d!", mCallingPid); sendReply(reply, 0); } else { LOG_ONEWAY("NOT sending reply to %d!", mCallingPid); } mCallingPid = origPid; mCallingUid = origUid; IF_LOG_TRANSACTIONS() { TextOutput::Bundle _b(alog); alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj " << tr.target.ptr << ": " << indent << reply << dedent << endl; } } break; case BR_DEAD_BINDER: { BpBinder *proxy = (BpBinder*)mIn.readInt32(); proxy->sendObituary(); mOut.writeInt32(BC_DEAD_BINDER_DONE); mOut.writeInt32((int32_t)proxy); } break; case BR_CLEAR_DEATH_NOTIFICATION_DONE: { BpBinder *proxy = (BpBinder*)mIn.readInt32(); proxy->getWeakRefs()->decWeak(proxy); } break; case BR_FINISHED: result = TIMED_OUT; break; case BR_NOOP: break; case BR_SPAWN_LOOPER: mProcess->spawnPooledThread(false); break; default: printf("*** BAD COMMAND %d received from Binder driver\n", cmd); result = UNKNOWN_ERROR; break; } if (result != NO_ERROR) { mLastError = result; } return result; }
Now look at BBinder::transact():
/* frameworks/native/libs/binder/Binder.cpp */
status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);     // dispatch to the subclass's onTransact()
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}
BnMediaPlayerService derives from BBinder (through BnInterface), so the call lands in its onTransact().
Everything finally comes together. Here is BnMediaPlayerService::onTransact():
/* frameworks/av/media/libmedia/IMediaPlayerService.cpp */ status_t BnMediaPlayerService::onTransact( uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags) { switch (code) { case CREATE: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IMediaPlayerClient> client = interface_cast<IMediaPlayerClient>(data.readStrongBinder()); int audioSessionId = data.readInt32(); sp<IMediaPlayer> player = create(client, audioSessionId); reply->writeStrongBinder(player->asBinder()); return NO_ERROR; } break; case DECODE_URL: { CHECK_INTERFACE(IMediaPlayerService, data, reply); const char* url = data.readCString(); uint32_t sampleRate; int numChannels; audio_format_t format; sp<IMemory> player = decode(url, &sampleRate, &numChannels, &format); reply->writeInt32(sampleRate); reply->writeInt32(numChannels); reply->writeInt32((int32_t) format); reply->writeStrongBinder(player->asBinder()); return NO_ERROR; } break; case DECODE_FD: { CHECK_INTERFACE(IMediaPlayerService, data, reply); int fd = dup(data.readFileDescriptor()); int64_t offset = data.readInt64(); int64_t length = data.readInt64(); uint32_t sampleRate; int numChannels; audio_format_t format; sp<IMemory> player = decode(fd, offset, length, &sampleRate, &numChannels, &format); reply->writeInt32(sampleRate); reply->writeInt32(numChannels); reply->writeInt32((int32_t) format); reply->writeStrongBinder(player->asBinder()); return NO_ERROR; } break; case CREATE_MEDIA_RECORDER: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IMediaRecorder> recorder = createMediaRecorder(); reply->writeStrongBinder(recorder->asBinder()); return NO_ERROR; } break; case CREATE_METADATA_RETRIEVER: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IMediaMetadataRetriever> retriever = createMetadataRetriever(); reply->writeStrongBinder(retriever->asBinder()); return NO_ERROR; } break; case GET_OMX: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IOMX> omx = getOMX(); reply->writeStrongBinder(omx->asBinder()); return NO_ERROR; } break; case MAKE_CRYPTO: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<ICrypto> crypto = makeCrypto(); reply->writeStrongBinder(crypto->asBinder()); return NO_ERROR; } break; case MAKE_DRM: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IDrm> drm = makeDrm(); reply->writeStrongBinder(drm->asBinder()); return NO_ERROR; } break; case MAKE_HDCP: { CHECK_INTERFACE(IMediaPlayerService, data, reply); bool createEncryptionModule = data.readInt32(); sp<IHDCP> hdcp = makeHDCP(createEncryptionModule); reply->writeStrongBinder(hdcp->asBinder()); return NO_ERROR; } break; case ADD_BATTERY_DATA: { CHECK_INTERFACE(IMediaPlayerService, data, reply); uint32_t params = data.readInt32(); addBatteryData(params); return NO_ERROR; } break; case PULL_BATTERY_DATA: { CHECK_INTERFACE(IMediaPlayerService, data, reply); pullBatteryData(reply); return NO_ERROR; } break; case LISTEN_FOR_REMOTE_DISPLAY: { CHECK_INTERFACE(IMediaPlayerService, data, reply); sp<IRemoteDisplayClient> client( interface_cast<IRemoteDisplayClient>(data.readStrongBinder())); String8 iface(data.readString8()); sp<IRemoteDisplay> display(listenForRemoteDisplay(client, iface)); reply->writeStrongBinder(display->asBinder()); return NO_ERROR; } break; case UPDATE_PROXY_CONFIG: { CHECK_INTERFACE(IMediaPlayerService, data, reply); const char *host = NULL; int32_t port = 0; const char *exclusionList = NULL; if (data.readInt32()) { host = data.readCString(); port = data.readInt32(); exclusionList = data.readCString(); } 
reply->writeInt32(updateProxyConfig(host, port, exclusionList)); return OK; } default: return BBinder::onTransact(code, data, reply, flags); } }
At this point the picture is clear: the BnXXX side's onTransact() receives the command and dispatches it to the virtual functions implemented by the derived class (here MediaPlayerService), which do the actual work.
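For example, the CREATE branch above is what a remote caller hits through its BpMediaPlayerService proxy. A rough caller-side sketch (openPlayer is a made-up helper; listener is the caller's own BnMediaPlayerClient implementation used for callbacks):

// The proxy packs both arguments into a Parcel, calls remote()->transact(CREATE, ...),
// and BnMediaPlayerService::onTransact() above unpacks them and invokes the real
// MediaPlayerService::create().
sp<IMediaPlayer> openPlayer(const sp<IMediaPlayerService>& service,
                            const sp<IMediaPlayerClient>& listener)
{
    int audioSessionId = 0;                     // default audio session for this sketch
    return service->create(listener, audioSessionId);
}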
A note on the thread pool: after startThreadPool() plus joinThreadPool() there really are two threads -- the main thread and one pooled worker -- and both run the message loop, both with isMain set to true. Why two? Presumably so that a single thread is not overwhelmed when the command volume is high; that explanation at least seems plausible.
Some people have reportedly commented out the final joinThreadPool() call and things still worked. But then, if the main thread simply returned, would the process keep running? In any case, the takeaway is that at least two threads sit in this loop handling binder commands.
4. MediaPlayerClient -- not covered here; left for a follow-up article.