1. What is RPC (Remote Procedure Call)
The purpose of the Binder system is to implement remote procedure calls (RPC): process A calls some function in process B. It is built on top of inter-process communication (IPC). One application scenario for RPC is the following:
Process A wants to turn on an LED, so it would call led_open and then led_ctl. But what if process A has no permission to open the driver?
Suppose there is a process B that does have permission to operate the LED driver. Process A can then operate the LED driver as follows:
① Pack the data: process A first packs the (pre-agreed) code of the function in process B it wants to call, along with other information, into a data packet.
② Process A sends the packed data to process B via IPC (inter-process communication).
③ After B extracts the data, it uses the function code parsed out of the packet to call its own led_open or led_ctl.
The net effect is as if process A had operated the LED directly; this is what RPC means. The whole process involves the three elements of IPC: source, destination, and data. In this example, the source is process A, the destination is process B, and the data is simply a buffer whose format both sides have agreed on in advance.
2. RPC as implemented by the Binder system
The Binder system uses a client/server (CS) architecture: the process that provides a service is called the server process, and the process that uses the service is called the client process. Server and client communicate through the Binder driver in the kernel. The Binder system also provides a context manager, servicemanager: a server process registers its services with servicemanager, and a client process can then obtain those services by querying servicemanager.
Back to the example above: process A wants to operate the LED. It can send the (pre-agreed) code of one of process B's functions to B via IPC and thereby operate the LED indirectly. But what if process A does not know which process can operate the LED on its behalf? To which process should it send the packed data? This is where the Binder system's housekeeper, servicemanager, comes in. First, process B registers an LED service with servicemanager. Process A can then query servicemanager for the LED service and receive a handle; this handle refers to process B. Process A now knows which process to send the data packet (the buffer in the agreed format) to in order to operate the LED indirectly. In this example, process B is the server process and process A is the client process.
To summarize briefly: the Binder system mainly involves four parties. One is process A, our client process; another is process B, our server process. How does the client know which server process to send data to? That is where the Binder system's housekeeper, servicemanager, comes in. Communication among the client process, the server process, and servicemanager is built on top of the kernel's binder driver. The relationship among the four is shown in the figure below.
3. A simple Binder application (based on the Android kernel, without the Android framework)
The Android source tree contains some binder applications written in C:
frameworks/native/cmds/servicemanager/bctest.c
frameworks/native/cmds/servicemanager/binder.c
frameworks/native/cmds/servicemanager/binder.h
frameworks/native/cmds/servicemanager/service_manager.c
Using these programs as a reference, we can build a Binder RPC program on Linux, on top of the Android kernel, to understand the whole chain of function calls involved in using Binder for inter-process communication.
First, copy the contents of frameworks/native/cmds/servicemanager from the Android source into our own project, then implement our server and client programs based on bctest.c. Since we are working outside the Android system, we also need to copy the header files the code depends on into the project, and modify service_manager.c and binder.c to strip out parts we do not need. Finally, we write a Makefile to build the whole project; the project layout is shown in the figure below.
3.1. The server process
First we implement the server program. It implements two functions, sayhello and sayhello_to, and registers a service with the ServiceManager through the binder system. It then loops, reading request data sent by client processes from the binder driver and using that data to call its own sayhello or sayhello_to. The whole flow is shown in the figure below.
Next, let's walk through the code.
/* test_server.h */
#ifndef _TEST_SERVER_H
#define _TEST_SERVER_H

/* Pre-agreed codes for the server process's functions */
#define HELLO_SVR_CMD_SAYHELLO    0
#define HELLO_SVR_CMD_SAYHELLO_TO 1

#endif // _TEST_SERVER_H
/* test_server.c */
/* Copyright 2008 The Android Open Source Project */

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <linux/types.h>
#include <stdbool.h>
#include <string.h>

#include <private/android_filesystem_config.h>

#include "binder.h"
#include "test_server.h"

int svcmgr_publish(struct binder_state *bs, uint32_t target, const char *name, void *ptr)
{
    int status;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);
    bio_put_obj(&msg, ptr);

    /* Remotely call the ServiceManager's do_add_service function */
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_ADD_SERVICE))
        return -1;

    status = bio_get_uint32(&reply);

    binder_done(bs, &msg, &reply);

    return status;
}

void sayhello(void)
{
    static int cnt = 0;
    fprintf(stderr, "say hello : %d\n", cnt++);
}

int sayhello_to(char *name)
{
    static int cnt = 0;
    fprintf(stderr, "say hello to %s : %d\n", name, cnt++);
    return cnt;
}

int hello_service_handler(struct binder_state *bs,
                          struct binder_transaction_data *txn,
                          struct binder_io *msg,
                          struct binder_io *reply)
{
    /* txn->code tells us which function to call;
     * arguments, if needed, are taken out of msg;
     * results, if any, are put into reply.
     */

    /* sayhello
     * sayhello_to
     */

    uint16_t *s;
    char name[512];
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int i;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);

    switch(txn->code) {
    case HELLO_SVR_CMD_SAYHELLO:
        sayhello();
        return 0;

    case HELLO_SVR_CMD_SAYHELLO_TO:
        /* Take the string out of msg */
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        for (i = 0; i < len; i++)
            name[i] = s[i];
        name[i] = '\0';

        /* Handle the request */
        i = sayhello_to(name);

        /* Put the result into reply */
        bio_put_uint32(reply, i);
        break;

    default:
        fprintf(stderr, "unknown code %d\n", txn->code);
        return -1;
    }

    return 0;
}

int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    /* Open and mmap the binder driver */
    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }

    /* Register services with the ServiceManager */
    ret = svcmgr_publish(bs, svcmgr, "hello", (void *)123);
    if (ret) {
        fprintf(stderr, "failed to publish hello service\n");
        return -1;
    }
    ret = svcmgr_publish(bs, svcmgr, "goodbye", (void *)124);
    if (ret) {
        fprintf(stderr, "failed to publish goodbye service\n");
    }

#if 0
    while (1) {
        /* read data */
        /* parse data, and process */
        /* reply */
    }
#endif
    /* Loop, handling data read from the binder driver
     * via the hello_service_handler we pass in */
    binder_loop(bs, hello_service_handler);

    return 0;
}
Next, let's analyze binder_loop. It does three main things:
1. Read data
2. Parse and handle the data
3. Reply
void binder_loop(struct binder_state *bs, binder_handler func)
{
    int res;
    struct binder_write_read bwr;
    uint32_t readbuf[32];

    // bwr.write_size = 0 means the ioctl below performs no write, only a read
    bwr.write_size = 0;
    bwr.write_consumed = 0;
    bwr.write_buffer = 0;

    readbuf[0] = BC_ENTER_LOOPER;
    binder_write(bs, readbuf, sizeof(uint32_t));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        /* Read data from the binder driver via ioctl */
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            ALOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }

        // Once data arrives, call binder_parse to parse it;
        // if the func parameter was supplied, it also handles the data
        res = binder_parse(bs, 0, (uintptr_t) readbuf, bwr.read_consumed, func);
        if (res == 0) {
            ALOGE("binder_loop: unexpected reply?!\n");
            break;
        }
        if (res < 0) {
            ALOGE("binder_loop: io error %d %s\n", res, strerror(errno));
            break;
        }
    }
}
Now let's see how the data is handled. Note the binder_handler parameter we pass in: it is a function pointer.
int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uintptr_t ptr, size_t size, binder_handler func)
{
    int r = 1;
    uintptr_t end = ptr + (uintptr_t) size;

    while (ptr < end) {
        uint32_t cmd = *(uint32_t *) ptr;
        ptr += sizeof(uint32_t);
#if TRACE
        fprintf(stderr,"%s:\n", cmd_name(cmd));
#endif
        switch(cmd) {
        case BR_NOOP:
            break;
        case BR_TRANSACTION_COMPLETE:
            break;
        case BR_INCREFS:
        case BR_ACQUIRE:
        case BR_RELEASE:
        case BR_DECREFS:
#if TRACE
            fprintf(stderr,"  %p, %p\n", (void *)ptr, (void *)(ptr + sizeof(void *)));
#endif
            ptr += sizeof(struct binder_ptr_cookie);
            break;
        // The command we receive here is BR_TRANSACTION
        case BR_TRANSACTION: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: txn too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (func) {
                unsigned rdata[256/4];
                struct binder_io msg;
                struct binder_io reply;
                int res;

                // On receiving the data, construct a binder_io
                bio_init(&reply, rdata, sizeof(rdata), 4);
                bio_init_from_txn(&msg, txn);
                // Call our handler function
                res = func(bs, txn, &msg, &reply);
                // Once handled, send back a reply
                binder_send_reply(bs, &reply, txn->data.ptr.buffer, res);
            }
            ptr += sizeof(*txn);
            break;
        }
        case BR_REPLY: {
            struct binder_transaction_data *txn = (struct binder_transaction_data *) ptr;
            if ((end - ptr) < sizeof(*txn)) {
                ALOGE("parse: reply too small!\n");
                return -1;
            }
            binder_dump_txn(txn);
            if (bio) {
                bio_init_from_txn(bio, txn);
                bio = 0;
            } else {
                /* todo FREE BUFFER */
            }
            ptr += sizeof(*txn);
            r = 0;
            break;
        }
        case BR_DEAD_BINDER: {
            struct binder_death *death = (struct binder_death *)(uintptr_t) *(binder_uintptr_t *)ptr;
            ptr += sizeof(binder_uintptr_t);
            death->func(bs, death->ptr);
            break;
        }
        case BR_FAILED_REPLY:
            r = -1;
            break;
        case BR_DEAD_REPLY:
            r = -1;
            break;
        default:
            ALOGE("parse: OOPS %d\n", cmd);
            return -1;
        }
    }

    return r;
}
3.2. The client process
The client process follows roughly the same flow as the server process. It first opens and mmaps the binder driver, then queries the ServiceManager for a service, and finally uses the handle the ServiceManager returned from that query to call the server process's functions remotely. The main flow is shown below.
Let's walk through the code.
/* test_client.c */
/* Copyright 2008 The Android Open Source Project */

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <linux/types.h>
#include <stdbool.h>
#include <string.h>

#include <private/android_filesystem_config.h>

#include "binder.h"
#include "test_server.h"

uint32_t svcmgr_lookup(struct binder_state *bs, uint32_t target, const char *name)
{
    uint32_t handle;
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header
    bio_put_string16_x(&msg, SVC_MGR_NAME);
    bio_put_string16_x(&msg, name);

    /* Remotely call the ServiceManager's do_find_service function */
    if (binder_call(bs, &msg, &reply, target, SVC_MGR_CHECK_SERVICE))
        return 0;

    handle = bio_get_ref(&reply);

    if (handle)
        binder_acquire(bs, handle);

    binder_done(bs, &msg, &reply);

    return handle;
}

struct binder_state *g_bs;
uint32_t g_handle;

void sayhello(void)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;

    /* Construct a binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header

    /* Put in the arguments (none here) */

    /* Use binder_call to invoke the server's sayhello remotely */
    if (binder_call(g_bs, &msg, &reply, g_handle, HELLO_SVR_CMD_SAYHELLO))
        return;

    /* Parse the return value out of reply (none here) */

    binder_done(g_bs, &msg, &reply);
}

int sayhello_to(char *name)
{
    unsigned iodata[512/4];
    struct binder_io msg, reply;
    int ret;

    /* Construct a binder_io */
    bio_init(&msg, iodata, sizeof(iodata), 4);
    bio_put_uint32(&msg, 0);  // strict mode header

    /* Put in the arguments */
    bio_put_string16_x(&msg, name);

    /* Use binder_call to invoke the server's sayhello_to remotely */
    if (binder_call(g_bs, &msg, &reply, g_handle, HELLO_SVR_CMD_SAYHELLO_TO))
        return 0;

    /* Parse the return value out of reply */
    ret = bio_get_uint32(&reply);

    binder_done(g_bs, &msg, &reply);

    return ret;
}

/* ./test_client hello
 * ./test_client hello <name>
 */
int main(int argc, char **argv)
{
    int fd;
    struct binder_state *bs;
    uint32_t svcmgr = BINDER_SERVICE_MANAGER;
    uint32_t handle;
    int ret;

    if (argc < 2) {
        fprintf(stderr, "Usage:\n");
        fprintf(stderr, "%s hello\n", argv[0]);
        fprintf(stderr, "%s hello <name>\n", argv[0]);
        return -1;
    }

    /* Open the binder driver */
    bs = binder_open(128*1024);
    if (!bs) {
        fprintf(stderr, "failed to open binder driver\n");
        return -1;
    }
    g_bs = bs;

    /* Query the ServiceManager for the hello service */
    handle = svcmgr_lookup(bs, svcmgr, "hello");
    if (!handle) {
        fprintf(stderr, "failed to get hello service\n");
        return -1;
    }
    g_handle = handle;

    /* send data to server */
    if (argc == 2) {
        sayhello();
    } else if (argc == 3) {
        ret = sayhello_to(argv[2]);
        fprintf(stderr, "get ret of sayhello_to = %d\n", ret);
    }

    binder_release(bs, handle);

    return 0;
}
One thing to note: whether it is the server process or the client process, every remote call into another process's functions goes through binder_call. Let's analyze this function next.
int binder_call(struct binder_state *bs,
                struct binder_io *msg, struct binder_io *reply,
                uint32_t target, uint32_t code)
{
    int res;
    /* Construct the arguments */
    struct binder_write_read bwr;
    struct {
        uint32_t cmd;
        struct binder_transaction_data txn;
    } __attribute__((packed)) writebuf;
    unsigned readbuf[32];

    if (msg->flags & BIO_F_OVERFLOW) {
        fprintf(stderr,"binder: txn buffer overflow\n");
        goto fail;
    }

    writebuf.cmd = BC_TRANSACTION;
    writebuf.txn.target.handle = target;
    writebuf.txn.code = code;
    writebuf.txn.flags = 0;
    writebuf.txn.data_size = msg->data - msg->data0;
    writebuf.txn.offsets_size = ((char*) msg->offs) - ((char*) msg->offs0);
    writebuf.txn.data.ptr.buffer = (uintptr_t)msg->data0;
    writebuf.txn.data.ptr.offsets = (uintptr_t)msg->offs0;

    bwr.write_size = sizeof(writebuf);
    bwr.write_consumed = 0;
    bwr.write_buffer = (uintptr_t) &writebuf;

    hexdump(msg->data0, msg->data - msg->data0);

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (uintptr_t) readbuf;

        /* Send the data via ioctl */
        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr);

        if (res < 0) {
            fprintf(stderr,"binder: ioctl failed (%s)\n", strerror(errno));
            goto fail;
        }

        /* Parse the returned data */
        res = binder_parse(bs, reply, (uintptr_t) readbuf, bwr.read_consumed, 0);
        if (res == 0) return 0;
        if (res < 0) goto fail;
    }

fail:
    memset(reply, 0, sizeof(*reply));
    reply->flags |= BIO_F_IOERROR;
    return -1;
}
The first parameter describes the current binder state and is the value returned by binder_open; the second is the data to send; the third holds the returned data; the fourth is the destination of the data, i.e. whom to send it to; and the fifth is the pre-agreed code of the remote function to call.
3.3. The ServiceManager process

Next, let's analyze its main function and the other key functions.
int main(int argc, char **argv)
{
    struct binder_state *bs;

    /* Open the binder driver */
    bs = binder_open(128*1024);
    if (!bs) {
        ALOGE("failed to open binder driver\n");
        return -1;
    }

    /* Tell the driver: I am the housekeeper (context manager) */
    if (binder_become_context_manager(bs)) {
        ALOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = BINDER_SERVICE_MANAGER;

    /* Enter an infinite loop, handling requests from clients */
    binder_loop(bs, svcmgr_handler);

    return 0;
}
Let's look at binder_become_context_manager to see how it registers itself with the driver as the housekeeper.
int binder_become_context_manager(struct binder_state *bs)
{
    /* Send the BINDER_SET_CONTEXT_MGR command via ioctl */
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}
The sequence of the whole flow is shown in the figure below.
To sum up, the whole binder remote procedure call works like this: first the housekeeper, ServiceManager, tells the binder driver "I am the housekeeper now"; then, once the server process and the client process have found each other through this housekeeper, the client process can call the server process's functions remotely.