Source: http://gityuan.com/2016/10/03/binder_linktodeath/
Based on the Android 6.0 source code; the relevant source files are:
frameworks/base/core/java/android/os/Binder.java
frameworks/base/core/jni/android_util_Binder.cpp
frameworks/native/libs/binder/BpBinder.cpp
1. Overview
The death notification mechanism lets the Bp end (client process) learn about the fate of the Bn end (server process): when the Bn process dies, the Bp end gets notified.
- Definition: AppDeathRecipient extends IBinder::DeathRecipient and mainly needs to implement binderDied() to receive the death notification.
- Registration: binder->linkToDeath(AppDeathRecipient) registers the AppDeathRecipient death notification on the binder.
The Bp end only needs to override binderDied() and do its cleanup work there; after the Bn end dies, binderDied() is called back to handle it.
1.1 Example
```java
public final class ActivityManagerService {
    private final boolean attachApplicationLocked(IApplicationThread thread, int pid) {
        ...
        // Create an IBinder.DeathRecipient subclass object
        AppDeathRecipient adr = new AppDeathRecipient(app, pid, thread);
        // Register the binder death callback
        thread.asBinder().linkToDeath(adr, 0);
        app.deathRecipient = adr;
        ...
        // Unregister the binder death callback
        app.unlinkDeathRecipient();
    }

    private final class AppDeathRecipient implements IBinder.DeathRecipient {
        ...
        public void binderDied() {
            synchronized(ActivityManagerService.this) {
                appDiedLocked(mApp, mPid, mAppThread, true);
            }
        }
    }
}
```
The linkToDeath and unlinkToDeath methods used above are implemented as follows:
[-> Binder.java]
```java
public class Binder implements IBinder {
    public void linkToDeath(DeathRecipient recipient, int flags) {
    }

    public boolean unlinkToDeath(DeathRecipient recipient, int flags) {
        return true;
    }
}

final class BinderProxy implements IBinder {
    public native void linkToDeath(DeathRecipient recipient, int flags)
            throws RemoteException;
    public native boolean unlinkToDeath(DeathRecipient recipient, int flags);
}
```
Comparing the two implementations:
- On the Binder (service) side, both methods are empty and have no real functionality;
- On the BinderProxy (client) side, both methods are native and carry the real functionality; this is the case that matters in practice, as the usage sketch below illustrates.
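The following usage sketch is not from the original article; it assumes `serviceBinder` is a BinderProxy obtained for some remote service and simply mirrors the AMS pattern shown in Section 1.1: linkToDeath() registers the recipient (and throws RemoteException if the remote is already dead), unlinkToDeath() removes it.

```java
import android.os.IBinder;
import android.os.RemoteException;
import android.util.Log;

public class ServiceWatcher {
    private static final String TAG = "ServiceWatcher";

    private final IBinder mServiceBinder; // assumed to be a BinderProxy for a remote service
    private final IBinder.DeathRecipient mRecipient = new IBinder.DeathRecipient() {
        @Override
        public void binderDied() {
            // Runs on a binder thread once the remote (Bn) process dies
            Log.w(TAG, "remote service died, cleaning up");
        }
    };

    public ServiceWatcher(IBinder serviceBinder) throws RemoteException {
        mServiceBinder = serviceBinder;
        // Throws RemoteException if the remote process is already dead
        mServiceBinder.linkToDeath(mRecipient, 0 /* flags */);
    }

    public void release() {
        // Stop listening once the binder is no longer needed
        mServiceBinder.unlinkToDeath(mRecipient, 0 /* flags */);
    }
}
```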
2. Registering the Death Notification in the Upper Layers
BinderProxy.linkToDeath() is a native method; through JNI it enters the following function:
2.1 linkToDeath
[-> android_util_Binder.cpp]
```cpp
static void android_os_BinderProxy_linkToDeath(JNIEnv* env, jobject obj,
        jobject recipient, jint flags)
{
    if (recipient == NULL) {
        jniThrowNullPointerException(env, NULL);
        return;
    }

    // Get the value of BinderProxy.mObject, i.e. the BpBinder object
    IBinder* target = (IBinder*)env->GetLongField(obj, gBinderProxyOffsets.mObject);
    ...
    // Only a binder proxy object enters this branch
    if (!target->localBinder()) {
        DeathRecipientList* list = (DeathRecipientList*)
                env->GetLongField(obj, gBinderProxyOffsets.mOrgue);
        // Create a JavaDeathRecipient object [see Section 2.1.1]
        sp<JavaDeathRecipient> jdr = new JavaDeathRecipient(env, recipient, list);
        // Register the death notification [see Section 2.2]
        status_t err = target->linkToDeath(jdr, NULL, flags);
        if (err != NO_ERROR) {
            // Registration failed: remove the reference from the list [see Section 2.1.3]
            jdr->clearReference();
            signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/);
        }
    }
}
```
The steps are:
- Get the DeathRecipientList: its member mList records the JavaDeathRecipient objects of this BinderProxy;
  - a single BpBinder can have multiple death recipients registered on it;
- Create a JavaDeathRecipient, which inherits from IBinder::DeathRecipient.
2.1.1 JavaDeathRecipient
[-> android_util_Binder.cpp]
```cpp
class JavaDeathRecipient : public IBinder::DeathRecipient
{
public:
    JavaDeathRecipient(JNIEnv* env, jobject object, const sp<DeathRecipientList>& list)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object)),
          mObjectWeak(NULL), mList(list)
    {
        // Add the sp of the current object to the DeathRecipientList
        list->add(this);
        android_atomic_inc(&gNumDeathRefs);
        incRefsCreated(env); // [see Section 2.1.2]
    }
};
```
This constructor mainly:
- creates a global reference for the recipient via env->NewGlobalRef(object) and stores it in the member variable mObject;
- adds a strong pointer (sp) to the current JavaDeathRecipient to the DeathRecipientList.
2.1.2 incRefsCreated
[-> android_util_Binder.cpp]
```cpp
static void incRefsCreated(JNIEnv* env)
{
    int old = android_atomic_inc(&gNumRefsCreated);
    if (old == 2000) {
        android_atomic_and(0, &gNumRefsCreated);
        // Trigger a forceGc
        env->CallStaticVoidMethod(gBinderInternalOffsets.mClass,
                gBinderInternalOffsets.mForceGc);
    }
}
```
This method increments the reference-creation counter; every 2000 increments it triggers one forceGc.
incRefsCreated() is called in the following scenarios:
- when a JavaBBinder object is created;
- when a JavaDeathRecipient object is created;
- in javaObjectForIBinder(), which converts a native BpBinder object into a Java BinderProxy object.
2.1.3 clearReference
[-> android_util_Binder.cpp ::JavaDeathRecipient]
```cpp
void clearReference()
{
    sp<DeathRecipientList> list = mList.promote();
    if (list != NULL) {
        list->remove(this); // Remove the reference from the list
    }
}
```
This clears the reference by removing the JavaDeathRecipient from the DeathRecipientList.
2.2 linkToDeath
[-> BpBinder.cpp]
```cpp
status_t BpBinder::linkToDeath(
    const sp<DeathRecipient>& recipient, void* cookie, uint32_t flags)
{
    Obituary ob;
    ob.recipient = recipient; // This object is the JavaDeathRecipient
    ob.cookie = cookie;       // cookie = NULL
    ob.flags = flags;         // flags = 0
    {
        AutoMutex _l(mLock);
        if (!mObitsSent) { // sendObituary has not run yet, so enter this branch
            if (!mObituaries) {
                mObituaries = new Vector<Obituary>;
                if (!mObituaries) {
                    return NO_MEMORY;
                }
                getWeakRefs()->incWeak(this);
                IPCThreadState* self = IPCThreadState::self();
                // [see Section 2.3]
                self->requestDeathNotification(mHandle, this);
                // [see Section 2.4]
                self->flushCommands();
            }
            // Add the newly created Obituary to mObituaries
            ssize_t res = mObituaries->add(ob);
            return res >= (ssize_t)NO_ERROR ? (status_t)NO_ERROR : res;
        }
    }
    return DEAD_OBJECT;
}
```
2.2.1 DeathRecipient Relationships
At the Java layer, BinderProxy.mOrgue points to the native DeathRecipientList, and the DeathRecipientList in turn records the JavaDeathRecipient objects.
2.3 requestDeathNotification
[-> IPCThreadState.cpp]
```cpp
status_t IPCThreadState::requestDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_REQUEST_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}
```
After this command enters the binder driver, it is handled by binder_thread_write(), which processes BC_REQUEST_DEATH_NOTIFICATION.
2.4 flushCommands
[-> IPCThreadState.cpp]
```cpp
void IPCThreadState::flushCommands()
{
    if (mProcess->mDriverFD <= 0)
        return;
    talkWithDriver(false);
}
```
flushCommands() simply pushes the buffered commands down to the driver; because the argument is false, it does not block waiting for a read. The BC_REQUEST_DEATH_NOTIFICATION command is thus sent to the kernel binder driver and, through ioctl, reaches binder_ioctl_write_read().
3. Registering the Notification in the Kernel Layer
3.1 binder_ioctl_write_read
[-> kernel/drivers/android/binder.c]
```c
static int binder_ioctl_write_read(struct file *filp,
        unsigned int cmd, unsigned long arg,
        struct binder_thread *thread)
{
    int ret = 0;
    struct binder_proc *proc = filp->private_data;
    void __user *ubuf = (void __user *)arg;
    struct binder_write_read bwr;

    if (copy_from_user(&bwr, ubuf, sizeof(bwr))) { // Copy user-space data ubuf into bwr
        ret = -EFAULT;
        goto out;
    }
    if (bwr.write_size > 0) { // The write buffer has data this time [see Section 3.2]
        ret = binder_thread_write(proc, thread,
                bwr.write_buffer, bwr.write_size, &bwr.write_consumed);
        ...
    }
    if (bwr.read_size > 0) { // The read buffer has no data this time
        ...
    }
    if (copy_to_user(ubuf, &bwr, sizeof(bwr))) { // Copy kernel data bwr back to user space
        ret = -EFAULT;
        goto out;
    }
out:
    return ret;
}
```
3.2 binder_thread_write
[-> kernel/drivers/android/binder.c]
```c
static int binder_thread_write(struct binder_proc *proc,
        struct binder_thread *thread,
        binder_uintptr_t binder_buffer, size_t size,
        binder_size_t *consumed)
{
    uint32_t cmd;
    // proc and thread both refer to the process issuing the request
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); // Read BC_REQUEST_DEATH_NOTIFICATION
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION: { // Register a death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;

            get_user(target, (uint32_t __user *)ptr); // Read the target handle
            ptr += sizeof(uint32_t);
            get_user(cookie, (void __user * __user *)ptr); // Read the BpBinder pointer
            ptr += sizeof(void *);
            ref = binder_get_ref(proc, target); // Get the binder_ref of the target service

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                // A native Bp may register multiple recipients, but the kernel
                // only allows one death notification per binder_ref
                if (ref->death) {
                    break;
                }
                death = kzalloc(sizeof(*death), GFP_KERNEL);
                INIT_LIST_HEAD(&death->work.entry);
                death->cookie = cookie; // The BpBinder pointer
                ref->death = death;
                // If the process hosting the target binder service is already dead,
                // send the death notification right away. This is the uncommon case.
                if (ref->node->proc == NULL) {
                    ref->death->work.type = BINDER_WORK_DEAD_BINDER;
                    // If the current thread is a binder thread, add the work
                    // directly to its own todo queue.
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                                          BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&ref->death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&ref->death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                }
            } else {
                ...
            }
        } break;
        case ...;
        }
        *consumed = ptr - buffer;
    }
}
```
While handling BC_REQUEST_DEATH_NOTIFICATION, this function covers the case where the process hosting the target binder service is already dead: it adds a BINDER_WORK_DEAD_BINDER item to the todo queue and sends the death notification immediately. This is the uncommon path.
The more common scenario is that the process hosting the binder service dies later; binder_release() is then called, which in turn calls binder_node_release(), and that path is what delivers the death-notification callback.
4. Triggering the Death Notification
When the process hosting a binder service dies, all of its resources are released, and binder is one of those resources. binder_open() opens the binder driver /dev/binder, a character device, and obtains a file descriptor. When the process exits, its open files are closed; for a device driver, closing the file ends up in the driver's release() callback. When the binder fd is released, the method invoked is binder_release().
However, not every close() system call triggers release(): release() is only called when the file structure is actually freed, and the kernel keeps a count of how many times a file structure is in use. This also covers applications that never explicitly close their files: on exit() the kernel releases all of the process's memory and closes its file resources, so the binder fd is eventually released as well.
4.1 release
[-> binder.c]
```c
static const struct file_operations binder_fops = {
    .owner = THIS_MODULE,
    .poll = binder_poll,
    .unlocked_ioctl = binder_ioctl,
    .compat_ioctl = binder_ioctl,
    .mmap = binder_mmap,
    .open = binder_open,
    .flush = binder_flush,
    .release = binder_release, // The callback corresponding to release
};
```
4.2 binder_release
```c
static int binder_release(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc = filp->private_data;
    debugfs_remove(proc->debugfs_entry);
    // [see Section 4.3]
    binder_defer_work(proc, BINDER_DEFERRED_RELEASE);
    return 0;
}
```
4.3 binder_defer_work
```c
static void binder_defer_work(struct binder_proc *proc, enum binder_deferred_state defer)
{
    mutex_lock(&binder_deferred_lock); // Acquire the lock
    // Add BINDER_DEFERRED_RELEASE
    proc->deferred_work |= defer;
    if (hlist_unhashed(&proc->deferred_work_node)) {
        hlist_add_head(&proc->deferred_work_node, &binder_deferred_list);
        // Add binder_deferred_work to the workqueue [see Section 4.4]
        queue_work(binder_deferred_workqueue, &binder_deferred_work);
    }
    mutex_unlock(&binder_deferred_lock); // Release the lock
}
```
4.4 queue_work
```c
// Global workqueue
static struct workqueue_struct *binder_deferred_workqueue;

static int __init binder_init(void)
{
    int ret;
    // Create a workqueue named "binder"
    binder_deferred_workqueue = create_singlethread_workqueue("binder");
    if (!binder_deferred_workqueue)
        return -ENOMEM;
    ...
}

device_initcall(binder_init);
```
The definition of binder_deferred_work:
```c
static DECLARE_WORK(binder_deferred_work, binder_deferred_func);

#define DECLARE_WORK(n, f) \
    struct work_struct n = __WORK_INITIALIZER(n, f)

#define __WORK_INITIALIZER(n, f) {              \
    .data = WORK_DATA_STATIC_INIT(),            \
    .entry = { &(n).entry, &(n).entry },        \
    .func = (f),                                \
    __WORK_INIT_LOCKDEP_MAP(#n, &(n))           \
    }
```
During binder driver initialization, binder_init() calls create_singlethread_workqueue("binder") to create a workqueue named "binder". A workqueue is a simple and effective kernel-thread mechanism provided by the kernel for deferring work.
Here the func of binder_deferred_work is binder_deferred_func, which is examined next.
4.5 binder_deferred_func
```c
static void binder_deferred_func(struct work_struct *work)
{
    struct binder_proc *proc;
    struct files_struct *files;
    int defer;

    do {
        mutex_lock(&binder_main_lock);     // Acquire binder_main_lock
        mutex_lock(&binder_deferred_lock);
        preempt_disable();                 // Disable CPU preemption
        if (!hlist_empty(&binder_deferred_list)) {
            proc = hlist_entry(binder_deferred_list.first,
                    struct binder_proc, deferred_work_node);
            hlist_del_init(&proc->deferred_work_node);
            defer = proc->deferred_work;
            proc->deferred_work = 0;
        } else {
            proc = NULL;
            defer = 0;
        }
        mutex_unlock(&binder_deferred_lock);

        files = NULL;
        if (defer & BINDER_DEFERRED_PUT_FILES) {
            files = proc->files;
            if (files)
                proc->files = NULL;
        }
        if (defer & BINDER_DEFERRED_FLUSH)
            binder_deferred_flush(proc);
        if (defer & BINDER_DEFERRED_RELEASE)
            binder_deferred_release(proc); // [see Section 4.6]

        mutex_unlock(&binder_main_lock);   // Release the lock
        preempt_enable_no_resched();
        if (files)
            put_files_struct(files);
    } while (proc);
}
```
So binder_release ultimately runs binder_deferred_release; likewise, binder_flush ultimately runs binder_deferred_flush.
4.6 binder_deferred_release
```c
static void binder_deferred_release(struct binder_proc *proc)
{
    struct binder_transaction *t;
    struct rb_node *n;
    int threads, nodes, incoming_refs, outgoing_refs, buffers,
        active_transactions, page_count;

    hlist_del(&proc->proc_node); // Remove the proc_node
    if (binder_context_mgr_node && binder_context_mgr_node->proc == proc) {
        binder_context_mgr_node = NULL;
    }

    // Release the binder_threads [see Section 4.6.1]
    threads = 0;
    active_transactions = 0;
    while ((n = rb_first(&proc->threads))) {
        struct binder_thread *thread;
        thread = rb_entry(n, struct binder_thread, rb_node);
        threads++;
        active_transactions += binder_free_thread(proc, thread);
    }

    // Release the binder_nodes [see Section 4.6.2]
    nodes = 0;
    incoming_refs = 0;
    while ((n = rb_first(&proc->nodes))) {
        struct binder_node *node;
        node = rb_entry(n, struct binder_node, rb_node);
        nodes++;
        rb_erase(&node->rb_node, &proc->nodes);
        incoming_refs = binder_node_release(node, incoming_refs);
    }

    // Release the binder_refs [see Section 4.6.3]
    outgoing_refs = 0;
    while ((n = rb_first(&proc->refs_by_desc))) {
        struct binder_ref *ref;
        ref = rb_entry(n, struct binder_ref, rb_node_desc);
        outgoing_refs++;
        binder_delete_ref(ref);
    }

    // Release the binder_work items [see Section 4.6.4]
    binder_release_work(&proc->todo);
    binder_release_work(&proc->delivered_death);

    buffers = 0;
    while ((n = rb_first(&proc->allocated_buffers))) {
        struct binder_buffer *buffer;
        buffer = rb_entry(n, struct binder_buffer, rb_node);
        t = buffer->transaction;
        if (t) {
            t->buffer = NULL;
            buffer->transaction = NULL;
        }
        // Release the binder_buffer [see Section 4.6.5]
        binder_free_buf(proc, buffer);
        buffers++;
    }

    binder_stats_deleted(BINDER_STAT_PROC);

    page_count = 0;
    if (proc->pages) {
        int i;
        for (i = 0; i < proc->buffer_size / PAGE_SIZE; i++) {
            void *page_addr;
            if (!proc->pages[i])
                continue;
            page_addr = proc->buffer + i * PAGE_SIZE;
            unmap_kernel_range((unsigned long)page_addr, PAGE_SIZE);
            __free_page(proc->pages[i]);
            page_count++;
        }
        kfree(proc->pages);
        vfree(proc->buffer);
    }
    put_task_struct(proc->tsk);
    kfree(proc);
}
```
Here proc is the binder_proc of the Bn end (the dying service process).
4.6.1 binder_free_thread
```c
static int binder_free_thread(struct binder_proc *proc, struct binder_thread *thread)
{
    struct binder_transaction *t;
    struct binder_transaction *send_reply = NULL;
    int active_transactions = 0;

    rb_erase(&thread->rb_node, &proc->threads);
    t = thread->transaction_stack;
    if (t && t->to_thread == thread)
        send_reply = t; // This thread is the server side of the transaction
    while (t) {
        active_transactions++;
        if (t->to_thread == thread) {
            t->to_proc = NULL;
            t->to_thread = NULL;
            if (t->buffer) {
                t->buffer->transaction = NULL;
                t->buffer = NULL;
            }
            t = t->to_parent;
        } else if (t->from == thread) {
            t->from = NULL;
            t = t->from_parent;
        }
    }
    // Set the requesting thread's return_error to BR_DEAD_REPLY [see Section 4.6.4.1]
    if (send_reply)
        binder_send_failed_reply(send_reply, BR_DEAD_REPLY);
    // [see Section 4.6.4]
    binder_release_work(&thread->todo);
    kfree(thread);
    binder_stats_deleted(BINDER_STAT_THREAD);
    return active_transactions;
}
```
4.6.2 binder_node_release
```c
static int binder_node_release(struct binder_node *node, int refs)
{
    struct binder_ref *ref;
    int death = 0;

    list_del_init(&node->work.entry);
    // [see Section 4.6.4]
    binder_release_work(&node->async_todo);

    if (hlist_empty(&node->refs)) {
        kfree(node); // No references left: delete the node directly
        binder_stats_deleted(BINDER_STAT_NODE);
        return refs;
    }

    node->proc = NULL;
    node->local_strong_refs = 0;
    node->local_weak_refs = 0;
    hlist_add_head(&node->dead_node, &binder_dead_nodes);

    hlist_for_each_entry(ref, &node->refs, node_entry) {
        refs++;
        if (!ref->death)
            continue;
        death++;
        if (list_empty(&ref->death->work.entry)) {
            // Add a BINDER_WORK_DEAD_BINDER item to the todo queue [see Section 5.1]
            ref->death->work.type = BINDER_WORK_DEAD_BINDER;
            list_add_tail(&ref->death->work.entry, &ref->proc->todo);
            wake_up_interruptible(&ref->proc->wait);
        }
    }
    return refs;
}
```
This function walks all binder_refs of the binder_node. For every ref with a registered death notification, it adds a BINDER_WORK_DEAD_BINDER item to the todo queue of the process owning that binder_ref and wakes up the binder thread waiting on proc->wait. The next step is described in Section 5.1.
4.6.3 binder_delete_ref
```c
static void binder_delete_ref(struct binder_ref *ref)
{
    rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
    rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
    if (ref->strong)
        binder_dec_node(ref->node, 1, 1);
    hlist_del(&ref->node_entry);
    binder_dec_node(ref->node, 0, 1);
    if (ref->death) {
        list_del(&ref->death->work.entry);
        kfree(ref->death);
        binder_stats_deleted(BINDER_STAT_DEATH);
    }
    kfree(ref);
    binder_stats_deleted(BINDER_STAT_REF);
}
```
4.6.4 binder_release_work
```c
static void binder_release_work(struct list_head *list)
{
    struct binder_work *w;

    while (!list_empty(list)) {
        w = list_first_entry(list, struct binder_work, entry);
        list_del_init(&w->entry); // Remove the binder_work
        switch (w->type) {
        case BINDER_WORK_TRANSACTION: {
            struct binder_transaction *t;
            t = container_of(w, struct binder_transaction, work);
            if (t->buffer->target_node && !(t->flags & TF_ONE_WAY)) {
                // Send a failed reply [see Section 4.6.4.1]
                binder_send_failed_reply(t, BR_DEAD_REPLY);
            } else {
                t->buffer->transaction = NULL;
                kfree(t);
                binder_stats_deleted(BINDER_STAT_TRANSACTION);
            }
        } break;
        case BINDER_WORK_TRANSACTION_COMPLETE: {
            kfree(w);
            binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
        } break;
        case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
        case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
            struct binder_ref_death *death;
            death = container_of(w, struct binder_ref_death, work);
            kfree(death);
            binder_stats_deleted(BINDER_STAT_DEATH);
        } break;
        default:
            break;
        }
    }
}
```
4.6.4.1 binder_send_failed_reply
```c
static void binder_send_failed_reply(struct binder_transaction *t, uint32_t error_code)
{
    struct binder_thread *target_thread;
    struct binder_transaction *next;

    while (1) {
        target_thread = t->from;
        if (target_thread) {
            if (target_thread->return_error != BR_OK &&
                target_thread->return_error2 == BR_OK) {
                target_thread->return_error2 = target_thread->return_error;
                target_thread->return_error = BR_OK;
            }
            if (target_thread->return_error == BR_OK) {
                binder_pop_transaction(target_thread, t);
                // Set the error return code and wake up the waiting thread
                target_thread->return_error = error_code;
                wake_up_interruptible(&target_thread->wait);
            }
            return;
        }
        next = t->from_parent;
        binder_pop_transaction(target_thread, t);
        if (next == NULL) {
            return;
        }
        t = next;
    }
}
```
4.6.5 binder_free_buf
```c
static void binder_free_buf(struct binder_proc *proc, struct binder_buffer *buffer)
{
    size_t size, buffer_size;

    buffer_size = binder_buffer_size(proc, buffer);
    size = ALIGN(buffer->data_size, sizeof(void *)) +
           ALIGN(buffer->offsets_size, sizeof(void *));
    if (buffer->async_transaction) {
        proc->free_async_space += size + sizeof(struct binder_buffer);
    }
    binder_update_page_range(proc, 0,
            (void *)PAGE_ALIGN((uintptr_t)buffer->data),
            (void *)(((uintptr_t)buffer->data + buffer_size) & PAGE_MASK),
            NULL);
    rb_erase(&buffer->rb_node, &proc->allocated_buffers);
    buffer->free = 1;
    if (!list_is_last(&buffer->entry, &proc->buffers)) {
        struct binder_buffer *next = list_entry(buffer->entry.next,
                struct binder_buffer, entry);
        if (next->free) {
            rb_erase(&next->rb_node, &proc->free_buffers);
            binder_delete_free_buffer(proc, next);
        }
    }
    if (proc->buffers.next != &buffer->entry) {
        struct binder_buffer *prev = list_entry(buffer->entry.prev,
                struct binder_buffer, entry);
        if (prev->free) {
            binder_delete_free_buffer(proc, buffer);
            rb_erase(&prev->rb_node, &proc->free_buffers);
            buffer = prev;
        }
    }
    binder_insert_free_buffer(proc, buffer);
}
```
4.6.6 Summary
The main work done by binder_deferred_release:
- binder_free_thread: every thread in proc->threads
  - binder_send_failed_reply(send_reply, BR_DEAD_REPLY): set the requesting thread's return_error to BR_DEAD_REPLY so that it returns immediately;
- binder_node_release: every node in proc->nodes
  - binder_release_work(&node->async_todo)
  - every death callback registered in node->refs
- binder_delete_ref: every reference in proc->refs_by_desc
  - clear the references
- binder_release_work: proc->todo and proc->delivered_death
  - binder_send_failed_reply(t, BR_DEAD_REPLY)
- binder_free_buf: every allocated buffer in proc->allocated_buffers
  - free the allocated buffers
- __free_page: every physical page in proc->pages
Whether a transaction is being processed by a binder thread or is still sitting in the process's todo queue, once the process is killed the requesting side is notified immediately so that its request can finish.
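From the caller's point of view this shows up as a DeadObjectException (a subclass of RemoteException) thrown out of the pending binder call. Below is a minimal sketch, not from the original article, assuming a hypothetical IRemoteService interface with a getStatus() method:

```java
import android.os.DeadObjectException;
import android.os.RemoteException;
import android.util.Log;

public final class RemoteCaller {
    private static final String TAG = "RemoteCaller";

    /** Hypothetical remote interface, standing in for a generated AIDL stub. */
    public interface IRemoteService {
        String getStatus() throws RemoteException;
    }

    public static String callSafely(IRemoteService service) {
        try {
            return service.getStatus(); // synchronous binder transaction
        } catch (DeadObjectException e) {
            // The server process died while the call was pending (BR_DEAD_REPLY)
            Log.w(TAG, "remote process died during the call", e);
            return null;
        } catch (RemoteException e) {
            Log.w(TAG, "binder call failed", e);
            return null;
        }
    }
}
```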
5. Handling the Death Notification
As described in Section 4.6.2, binder_node_release() adds a BINDER_WORK_DEAD_BINDER item to the todo queue and wakes up the binder thread waiting on proc->wait.
5.1 binder_thread_read
```c
static int binder_thread_read(struct binder_proc *proc,
        struct binder_thread *thread,
        binder_uintptr_t binder_buffer, size_t size,
        binder_size_t *consumed, int non_block)
{
    ...
    // The waiting binder thread is woken up here
    wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    binder_lock(__func__); // Acquire the lock
    if (wait_for_proc_work)
        proc->ready_threads--; // One less idle binder thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        // Take the binder_work queued earlier from the todo queue;
        // its type is BINDER_WORK_DEAD_BINDER
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        }

        switch (w->type) {
        case BINDER_WORK_DEAD_BINDER: {
            struct binder_ref_death *death;
            uint32_t cmd;

            death = container_of(w, struct binder_ref_death, work);
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                ...
            else
                cmd = BR_DEAD_BINDER; // This branch is taken
            put_user(cmd, (uint32_t __user *)ptr); // Copy to user space [see Section 5.2]
            ptr += sizeof(uint32_t);
            // The cookie here is the BpBinder passed in earlier
            put_user(death->cookie, (binder_uintptr_t __user *)ptr);
            ptr += sizeof(binder_uintptr_t);
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                ...
            } else
                // Move the work onto the delivered_death queue
                list_move(&w->entry, &proc->delivered_death);
            if (cmd == BR_DEAD_BINDER)
                goto done;
        } break;
        }
    }
    ...
    return 0;
}
```
The BR_DEAD_BINDER command is written to user space, where execution continues as follows:
5.2 IPC.getAndExecuteCommand
```cpp
status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    result = talkWithDriver(); // Interact with the binder driver
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t))
            return result;
        cmd = mIn.readInt32(); // Read the command

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        result = executeCommand(cmd); // [see Section 5.3]

        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        set_sched_policy(mMyThreadId, SP_FOREGROUND);
    }
    return result;
}
```
5.3 IPC.executeCommand
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary(); // [see Section 5.4]
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        } break;
    ...
    }
    ...
    return result;
}
```
Even if death recipients are registered multiple times on the same Bp, the driver is only asked to register (and only delivers) the death notification once; sendObituary() then reports the death to every recipient that is still linked.
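As a small illustration (not from the original article; `serviceBinder` stands for any BinderProxy to a remote service): linking two recipients sends BC_REQUEST_DEATH_NOTIFICATION only once, yet each recipient's binderDied() still runs when the service dies.

```java
import android.os.IBinder;
import android.os.RemoteException;
import android.util.Log;

public final class MultiRecipientDemo {
    public static void watch(IBinder serviceBinder) throws RemoteException {
        // The first linkToDeath() creates mObituaries and registers with the driver;
        // the second only appends another Obituary to the same BpBinder.
        serviceBinder.linkToDeath(
                () -> Log.w("MultiRecipientDemo", "died: logging recipient"), 0);
        serviceBinder.linkToDeath(
                () -> Log.w("MultiRecipientDemo", "died: cleanup recipient"), 0);
        // When the remote process dies, the driver delivers a single BR_DEAD_BINDER;
        // sendObituary() then invokes binderDied() on both recipients.
    }
}
```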
5.4 Bp.sendObituary
```cpp
void BpBinder::sendObituary()
{
    mAlive = 0;
    if (mObitsSent)
        return;

    mLock.lock();
    Vector<Obituary>* obits = mObituaries;
    if (obits != NULL) {
        IPCThreadState* self = IPCThreadState::self();
        // Clear the death notification in the driver [see Section 6.2]
        self->clearDeathNotification(mHandle, this);
        self->flushCommands();
        mObituaries = NULL;
    }
    mObitsSent = 1;
    mLock.unlock();

    if (obits != NULL) {
        const size_t N = obits->size();
        for (size_t i = 0; i < N; i++) {
            // Report the death [see Section 5.5]
            reportOneDeath(obits->itemAt(i));
        }
        delete obits;
    }
}
```
5.5 reportOneDeath
```cpp
void BpBinder::reportOneDeath(const Obituary& obit)
{
    // Promote the weak reference to an sp
    sp<DeathRecipient> recipient = obit.recipient.promote();
    if (recipient == NULL)
        return;
    // Invoke the death-notification callback
    recipient->binderDied(this);
}
```
Since the example at the beginning of this article registered an AppDeathRecipient, the following method is called back.
5.6 binderDied
```java
private final class AppDeathRecipient implements IBinder.DeathRecipient {
    ...
    public void binderDied() {
        synchronized(ActivityManagerService.this) {
            appDiedLocked(mApp, mPid, mAppThread, true);
        }
    }
}
```
6. unlinkToDeath
6.1 unlinkToDeath
```cpp
status_t BpBinder::unlinkToDeath(
    const wp<DeathRecipient>& recipient, void* cookie, uint32_t flags,
    wp<DeathRecipient>* outRecipient)
{
    AutoMutex _l(mLock);
    if (mObitsSent) {
        return DEAD_OBJECT;
    }

    const size_t N = mObituaries ? mObituaries->size() : 0;
    for (size_t i = 0; i < N; i++) {
        const Obituary& obit = mObituaries->itemAt(i);
        if ((obit.recipient == recipient
                || (recipient == NULL && obit.cookie == cookie))
                && obit.flags == flags) {
            if (outRecipient != NULL) {
                *outRecipient = mObituaries->itemAt(i).recipient;
            }
            mObituaries->removeAt(i); // Remove this death recipient
            if (mObituaries->size() == 0) {
                // Clear the death notification in the driver
                IPCThreadState* self = IPCThreadState::self();
                self->clearDeathNotification(mHandle, this);
                self->flushCommands();
                delete mObituaries;
                mObituaries = NULL;
            }
            return NO_ERROR;
        }
    }
    return NAME_NOT_FOUND;
}
```
6.2 clearDeathNotification
```cpp
status_t IPCThreadState::clearDeathNotification(int32_t handle, BpBinder* proxy)
{
    mOut.writeInt32(BC_CLEAR_DEATH_NOTIFICATION);
    mOut.writeInt32((int32_t)handle);
    mOut.writePointer((uintptr_t)proxy);
    return NO_ERROR;
}
```
This writes the BC_CLEAR_DEATH_NOTIFICATION command; after flushCommands() it enters the kernel.
6.3 Cancelling the Death Notification in the Kernel
6.3.1 binder_thread_write
```c
static int binder_thread_write(struct binder_proc *proc,
        struct binder_thread *thread,
        binder_uintptr_t binder_buffer, size_t size,
        binder_size_t *consumed)
{
    uint32_t cmd;
    // proc and thread both refer to the process issuing the request
    struct binder_context *context = proc->context;
    void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        get_user(cmd, (uint32_t __user *)ptr); // Read BC_CLEAR_DEATH_NOTIFICATION
        ptr += sizeof(uint32_t);
        switch (cmd) {
        case BC_REQUEST_DEATH_NOTIFICATION:
        case BC_CLEAR_DEATH_NOTIFICATION: { // Clear the death notification
            uint32_t target;
            void __user *cookie;
            struct binder_ref *ref;
            struct binder_ref_death *death;

            get_user(target, (uint32_t __user *)ptr); // Read the target handle
            ptr += sizeof(uint32_t);
            get_user(cookie, (void __user * __user *)ptr);
            ptr += sizeof(void *);
            ref = binder_get_ref(proc, target); // Get the binder_ref of the target service

            if (cmd == BC_REQUEST_DEATH_NOTIFICATION) {
                ...
            } else {
                if (ref->death == NULL) {
                    break;
                }
                death = ref->death;
                if (death->cookie != cookie) {
                    break; // Check that it is the same BpBinder
                }
                ref->death = NULL; // Reset the death notification to NULL
                if (list_empty(&death->work.entry)) {
                    // Add a BINDER_WORK_CLEAR_DEATH_NOTIFICATION item
                    death->work.type = BINDER_WORK_CLEAR_DEATH_NOTIFICATION;
                    if (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
                                          BINDER_LOOPER_STATE_ENTERED)) {
                        list_add_tail(&death->work.entry, &thread->todo);
                    } else {
                        list_add_tail(&death->work.entry, &proc->todo);
                        wake_up_interruptible(&proc->wait);
                    }
                } else {
                    death->work.type = BINDER_WORK_DEAD_BINDER_AND_CLEAR;
                }
            }
        } break;
        case ...;
        }
    }
}
```
A BINDER_WORK_CLEAR_DEATH_NOTIFICATION item is added to the current thread's or process's todo queue.
6.3.2 binder_thread_read
```c
static int binder_thread_read(struct binder_proc *proc,
        struct binder_thread *thread,
        binder_uintptr_t binder_buffer, size_t size,
        binder_size_t *consumed, int non_block)
{
    ...
    // The waiting binder thread is woken up here
    wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
    binder_lock(__func__); // Acquire the lock
    if (wait_for_proc_work)
        proc->ready_threads--; // One less idle binder thread
    thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

    while (1) {
        uint32_t cmd;
        struct binder_transaction_data tr;
        struct binder_work *w;
        struct binder_transaction *t = NULL;

        // Take the binder_work queued earlier from the todo queue;
        // its type here is BINDER_WORK_CLEAR_DEATH_NOTIFICATION
        if (!list_empty(&thread->todo)) {
            w = list_first_entry(&thread->todo, struct binder_work, entry);
        } else if (!list_empty(&proc->todo) && wait_for_proc_work) {
            w = list_first_entry(&proc->todo, struct binder_work, entry);
        }

        switch (w->type) {
        case BINDER_WORK_DEAD_BINDER:
        case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
        case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
            struct binder_ref_death *death;
            uint32_t cmd;

            death = container_of(w, struct binder_ref_death, work);
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
                cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE; // Clearing is complete
            ...
            if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
                list_del(&w->entry); // Remove the work from the queue
                kfree(death);
                binder_stats_deleted(BINDER_STAT_DEATH);
            }
            ...
            if (cmd == BR_DEAD_BINDER)
                goto done;
        } break;
        }
    }
    ...
    return 0;
}
```
Execution then returns to user space; the handling of BR_CLEAR_DEATH_NOTIFICATION_DONE is shown next.
6.4 IPC.executeCommand
```cpp
status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            // Decrement the weak reference count
            proxy->getWeakRefs()->decWeak(proxy);
        } break;
    ...
    }
    ...
    return result;
}
```
7. Conclusion
Every process that participates in binder IPC opens /dev/binder. When a process exits abnormally, the binder driver guarantees that any /dev/binder file the process failed to close properly is released: the release callback registered for /dev/binder runs, performs the cleanup, and checks whether any death notification was registered against the dying BBinder. If so, a death notification is sent to the corresponding BpBinder end.
The death callback DeathRecipient is only meaningful on the Bp side, because it exists to monitor the death of the Bn end; if the Bn end registered a death notification on itself, there would be no one left to deliver it once its own process died.
Every BinderProxy has an associated DeathRecipientList object (referenced through mOrgue, see Section 2.2.1) that records its registered DeathRecipients.
7.1 Flow Diagram
(The flow diagram is not reproduced here; see the original article linked at the top of this page for the full-size image.)
The linkToDeath flow:
- requestDeathNotification() passes the BC_REQUEST_DEATH_NOTIFICATION command to the driver, with mHandle and the BpBinder object as parameters;
- in binder_thread_write(), the same BpBinder may register multiple death recipients, but the kernel only allows the death notification to be registered once;
- registering a death callback essentially attaches a binder_ref_death pointer to the binder_ref structure, with the cookie of the binder_ref_death recording the BpBinder pointer.
The unlinkToDeath flow:
- unlinkToDeath only tells the driver to clear the death notification once all mObituaries of the BpBinder have been removed; otherwise it merely removes one recipient at the native layer (see the sketch after this list);
- clearDeathNotification() passes BC_CLEAR_DEATH_NOTIFICATION to the driver, with mHandle and the BpBinder object as parameters;
- binder_thread_write() adds a BINDER_WORK_CLEAR_DEATH_NOTIFICATION item to the todo queue of the current process/thread.
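A brief sketch of that unlink behavior at the Java level (not from the original article; the recipients and binder are placeholders):

```java
import android.os.IBinder;
import android.os.RemoteException;

public final class UnlinkDemo {
    public static void demo(IBinder serviceBinder,
            IBinder.DeathRecipient first, IBinder.DeathRecipient second)
            throws RemoteException {
        serviceBinder.linkToDeath(first, 0);    // registers with the driver (BC_REQUEST_DEATH_NOTIFICATION)
        serviceBinder.linkToDeath(second, 0);   // only adds another Obituary

        serviceBinder.unlinkToDeath(first, 0);  // removes one Obituary; the driver registration stays
        serviceBinder.unlinkToDeath(second, 0); // last Obituary removed: BC_CLEAR_DEATH_NOTIFICATION is sent
    }
}
```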
Triggering the death callback:
- In the process hosting the service entity: binder_release() eventually runs binder_node_release(), which loops over all ref->death objects under that binder_node; for each one present, a BINDER_WORK_DEAD_BINDER item is added to ref->proc->todo (the todo queue of the process owning the ref);
- In the process holding the reference: binder_thread_read() writes BR_DEAD_BINDER to user space, which triggers the death callback;
- sendObituary() then delivers the death notification to the recipients.