Table of Contents
Overview
1. Setting Up an RDMA Learning Environment
RDMA requires a dedicated RDMA NIC or an InfiniBand HCA. If you want to learn RDMA without this hardware, you can use a software RDMA emulation environment, SoftiWARP:
- Source: https://github.com/zrlio/softiwarp
- Installation guide: http://www.reflectionsofthevoid.com/2011/03/how-to-install-soft-iwarp-on-ubuntu.html
More rdmacm examples:
- https://github.com/tarickb/the-geek-in-the-corner
Note that these examples use IPv6 connections by default; to test in an IPv4 environment, you must first modify the code to use IPv4 addresses.
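One way to force IPv4 on the client side is to pass getaddrinfo a hints structure with ai_family set to AF_INET (the client code below calls getaddrinfo with NULL hints, which may return an IPv6 address first). A minimal sketch; "127.0.0.1" and the port string "7471" are placeholders:

```c
#include <assert.h>
#include <netdb.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void)
{
    struct addrinfo hints, *addr = NULL;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* restrict results to IPv4 */
    hints.ai_socktype = SOCK_STREAM;

    /* "7471" is an arbitrary example port */
    int rc = getaddrinfo("127.0.0.1", "7471", &hints, &addr);
    assert(rc == 0 && addr->ai_family == AF_INET);
    printf("resolved an IPv4 address\n");

    freeaddrinfo(addr);
    return 0;
}
```

The resulting addr->ai_addr can then be handed to rdma_resolve_addr() exactly as the client code below does.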
2. Comparing RDMA with Sockets
Like socket connections, RDMA connections come in reliable and unreliable flavors, but the analogy is not exact. For sockets, the reliable connection is TCP, which is stream-oriented; the unreliable one is UDP, which is message-oriented. For RDMA, both reliable and unreliable connections are message-oriented.
From a programming perspective, RDMA code likewise splits into a server side and a client side, with familiar bind, listen, connect, and accept steps, yet the details differ in many ways.
rdma_cm API references:
https://linux.die.net/man/3/rdma_create_id (recommended)
https://www.ibm.com/docs/en/aix/7.2?topic=operations-rdma-listen (brief)
3. Server-Side Code Flow
main()
{
- rdma_create_event_channel
  This creates an event channel. The event channel is how the RDMA stack notifies the application when an operation completes or a connection-related event occurs. Internally it is just a file descriptor, so it supports poll and similar operations.
- rdma_create_id(channel, **id, ...)
  This creates an rdma_cm_id, conceptually equivalent to the listening socket in socket programming.
- rdma_bind_addr(id, addr)
  As in socket programming, a local address and port must be bound before listening.
- rdma_listen(id, backlog)
  Start listening for client connection requests.
- rdma_get_cm_event
  This call operates on the event channel created in the first step and retrieves one event from it. It blocks until an event is available. If all goes well, it returns an RDMA_CM_EVENT_CONNECT_REQUEST event, meaning a client has initiated a connection.
  The event carries a new rdma_cm_id in its parameters. This differs from sockets, where a new socket fd is created only after accept.
on_event()
{
on_connect_request() // RDMA_CM_EVENT_CONNECT_REQUEST
{
build_context()
{
- ibv_alloc_pd
  Allocate a protection domain. A protection domain can be viewed as a memory-protection unit: it establishes an association between memory regions and queues, preventing unauthorized access.
- ibv_create_comp_channel
  Like the event channel created earlier, this is also an event channel, but it is used only to report completion-queue events. When a new work request completes, the application is notified through this channel.
- ibv_create_cq
  Create the completion queue, specifying at creation time that it uses the completion channel just created.
}//--end build_context()
- rdma_create_qp
  Create a queue pair: a queue pair consists of a send queue and a receive queue. The CQ created above is specified as its completion queue, and at creation the QP is associated with the protection domain from ibv_alloc_pd.
- ibv_reg_mr
  Register a memory region. Any memory used by RDMA must be registered in advance. This is understandable: DMA-able memory has requirements on boundary alignment, whether it may be swapped out, and so on.
- rdma_accept
  With all the preparation complete, we can finally call accept on the client's request. Phew, a deep breath ~~ but hold on:
}
//--end on_connect_request()
- rdma_ack_cm_event
  Every event obtained from the event channel must be acknowledged with this function, otherwise memory leaks. This ack pairs with the rdma_get_cm_event call above: every get must have a matching ack.
- rdma_get_cm_event
  Call rdma_get_cm_event again. If everything is fine, we should now get an RDMA_CM_EVENT_ESTABLISHED event, indicating the connection is established. No extra handling is needed; just call rdma_ack_cm_event.
}//--end on_event()
At last, data transfer can begin ==== (how to transfer is for the next post)
4. Closing the Connection
- Disconnect
  When rdma_get_cm_event returns an RDMA_CM_EVENT_DISCONNECTED event, the client has disconnected and the server must clean up. Call rdma_ack_cm_event to release the event, then call the following functions in order to release the connection, memory, and queue resources:
- rdma_disconnect
- rdma_destroy_qp
- ibv_dereg_mr
- rdma_destroy_id
  Release the rdma_cm_id of the client connection.
- rdma_destroy_id
  Release the listening rdma_cm_id.
- rdma_destroy_event_channel
  Release the event channel.
}
// end main
Example
Source: https://github.com/tarickb/the-geek-in-the-corner
Makefile
.PHONY: clean
CFLAGS := -Wall -g
LDLIBS := ${LDLIBS} -lrdmacm -libverbs -lpthread
APPS := server client
all: ${APPS}
clean:
	rm -f ${APPS}
Note: the Makefile passes no -L to specify a library path, so the libraries behind -lrdmacm -libverbs -lpthread (librdmacm.so, libibverbs.so, libpthread.so) must reside in a default location such as /usr/lib or /usr/lib64.
server and client sources: https://download.csdn.net/download/bandaoyu/18630109
Server: server.c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <rdma/rdma_cma.h>
#define TEST_NZ(x) do { if ( (x)) die("error: " #x " failed (returned non-zero)." ); } while (0)
#define TEST_Z(x) do { if (!(x)) die("error: " #x " failed (returned zero/null)."); } while (0)
const int BUFFER_SIZE = 1024;
struct context {
struct ibv_context *ctx;
struct ibv_pd *pd;
struct ibv_cq *cq;
struct ibv_comp_channel *comp_channel;
pthread_t cq_poller_thread;
};
struct connection {
struct ibv_qp *qp;
struct ibv_mr *recv_mr;
struct ibv_mr *send_mr;
char *recv_region;
char *send_region;
};
static void die(const char *reason);
static void build_context(struct ibv_context *verbs);
static void build_qp_attr(struct ibv_qp_init_attr *qp_attr);
static void * poll_cq(void *);
static void post_receives(struct connection *conn);
static void register_memory(struct connection *conn);
static void on_completion(struct ibv_wc *wc);
static int on_connect_request(struct rdma_cm_id *id);
static int on_connection(void *context);
static int on_disconnect(struct rdma_cm_id *id);
static int on_event(struct rdma_cm_event *event);
static struct context *s_ctx = NULL;
int main(int argc, char **argv)
{
#if _USE_IPV6
struct sockaddr_in6 addr;
#else
struct sockaddr_in addr;
#endif
struct rdma_cm_event *event = NULL;
struct rdma_cm_id *listener = NULL;
struct rdma_event_channel *ec = NULL;
uint16_t port = 0;
memset(&addr, 0, sizeof(addr));
#if _USE_IPV6
addr.sin6_family = AF_INET6;
#else
addr.sin_family = AF_INET;
#endif
TEST_Z(ec = rdma_create_event_channel());
TEST_NZ(rdma_create_id(ec, &listener, NULL, RDMA_PS_TCP));
TEST_NZ(rdma_bind_addr(listener, (struct sockaddr *)&addr));
TEST_NZ(rdma_listen(listener, 10)); /* backlog=10 is arbitrary */
port = ntohs(rdma_get_src_port(listener)); /* rdma_get_src_port returns the TCP port bound to the listener */
printf("listening on port %d.\n", port);
while (rdma_get_cm_event(ec, &event) == 0) {
struct rdma_cm_event event_copy;
memcpy(&event_copy, event, sizeof(*event));
rdma_ack_cm_event(event);
if (on_event(&event_copy))
break;
}
rdma_destroy_id(listener);
rdma_destroy_event_channel(ec);
return 0;
}
void die(const char *reason)
{
fprintf(stderr, "%s\n", reason);
exit(EXIT_FAILURE);
}
void build_context(struct ibv_context *verbs)
{
if (s_ctx) {
if (s_ctx->ctx != verbs)
die("cannot handle events in more than one context.");
return;
}
s_ctx = (struct context *)malloc(sizeof(struct context));
s_ctx->ctx = verbs;
TEST_Z(s_ctx->pd = ibv_alloc_pd(s_ctx->ctx));
TEST_Z(s_ctx->comp_channel = ibv_create_comp_channel(s_ctx->ctx));
TEST_Z(s_ctx->cq = ibv_create_cq(s_ctx->ctx, 10, NULL, s_ctx->comp_channel, 0)); /* cqe=10 is arbitrary */
TEST_NZ(ibv_req_notify_cq(s_ctx->cq, 0)); /* request notification via the comp channel for the next completion */
TEST_NZ(pthread_create(&s_ctx->cq_poller_thread, NULL, poll_cq, NULL));
}
void build_qp_attr(struct ibv_qp_init_attr *qp_attr)
{
memset(qp_attr, 0, sizeof(*qp_attr));
qp_attr->send_cq = s_ctx->cq;
qp_attr->recv_cq = s_ctx->cq;
qp_attr->qp_type = IBV_QPT_RC;
qp_attr->cap.max_send_wr = 10;
qp_attr->cap.max_recv_wr = 10;
qp_attr->cap.max_send_sge = 1;
qp_attr->cap.max_recv_sge = 1;
}
void * poll_cq(void *ctx)
{
struct ibv_cq *cq;
struct ibv_wc wc;
while (1) {
TEST_NZ(ibv_get_cq_event(s_ctx->comp_channel, &cq, &ctx));
ibv_ack_cq_events(cq, 1);
TEST_NZ(ibv_req_notify_cq(cq, 0));
while (ibv_poll_cq(cq, 1, &wc))
on_completion(&wc);
}
return NULL;
}
void post_receives(struct connection *conn)
{
struct ibv_recv_wr wr, *bad_wr = NULL;
struct ibv_sge sge;
wr.wr_id = (uintptr_t)conn;
wr.next = NULL;
wr.sg_list = &sge;
wr.num_sge = 1;
sge.addr = (uintptr_t)conn->recv_region;
sge.length = BUFFER_SIZE;
sge.lkey = conn->recv_mr->lkey;
TEST_NZ(ibv_post_recv(conn->qp, &wr, &bad_wr));
}
void register_memory(struct connection *conn)
{
conn->send_region = malloc(BUFFER_SIZE);
conn->recv_region = malloc(BUFFER_SIZE);
TEST_Z(conn->send_mr = ibv_reg_mr(
s_ctx->pd,
conn->send_region,
BUFFER_SIZE,
0));
TEST_Z(conn->recv_mr = ibv_reg_mr(
s_ctx->pd,
conn->recv_region,
BUFFER_SIZE,
IBV_ACCESS_LOCAL_WRITE));
}
void on_completion(struct ibv_wc *wc)
{
if (wc->status != IBV_WC_SUCCESS)
die("on_completion: status is not IBV_WC_SUCCESS.");
if (wc->opcode & IBV_WC_RECV) {
struct connection *conn = (struct connection *)(uintptr_t)wc->wr_id;
printf("received message: %s\n", conn->recv_region);
} else if (wc->opcode == IBV_WC_SEND) {
printf("send completed successfully.\n");
}
}
int on_connect_request(struct rdma_cm_id *id)
{
struct ibv_qp_init_attr qp_attr;
struct rdma_conn_param cm_params;
struct connection *conn;
printf("received connection request.\n");
build_context(id->verbs);
build_qp_attr(&qp_attr);
TEST_NZ(rdma_create_qp(id, s_ctx->pd, &qp_attr));
id->context = conn = (struct connection *)malloc(sizeof(struct connection));
conn->qp = id->qp;
register_memory(conn);
post_receives(conn);
memset(&cm_params, 0, sizeof(cm_params));
TEST_NZ(rdma_accept(id, &cm_params));
return 0;
}
int on_connection(void *context)
{
struct connection *conn = (struct connection *)context;
struct ibv_send_wr wr, *bad_wr = NULL;
struct ibv_sge sge;
snprintf(conn->send_region, BUFFER_SIZE, "message from passive/server side with pid %d", getpid());
printf("connected. posting send...\n");
memset(&wr, 0, sizeof(wr));
wr.opcode = IBV_WR_SEND;
wr.sg_list = &sge;
wr.num_sge = 1;
wr.send_flags = IBV_SEND_SIGNALED;
sge.addr = (uintptr_t)conn->send_region;
sge.length = BUFFER_SIZE;
sge.lkey = conn->send_mr->lkey;
TEST_NZ(ibv_post_send(conn->qp, &wr, &bad_wr));
return 0;
}
int on_disconnect(struct rdma_cm_id *id)
{
struct connection *conn = (struct connection *)id->context;
printf("peer disconnected.\n");
rdma_destroy_qp(id);
ibv_dereg_mr(conn->send_mr);
ibv_dereg_mr(conn->recv_mr);
free(conn->send_region);
free(conn->recv_region);
free(conn);
rdma_destroy_id(id);
return 0;
}
int on_event(struct rdma_cm_event *event)
{
int r = 0;
if (event->event == RDMA_CM_EVENT_CONNECT_REQUEST)
r = on_connect_request(event->id);
else if (event->event == RDMA_CM_EVENT_ESTABLISHED)
r = on_connection(event->id->context);
else if (event->event == RDMA_CM_EVENT_DISCONNECTED)
r = on_disconnect(event->id);
else
die("on_event: unknown event.");
return r;
}
Client: client.c
#include <netdb.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pthread.h>
#include <rdma/rdma_cma.h>
#define TEST_NZ(x) do { if ( (x)) die("error: " #x " failed (returned non-zero)." ); } while (0)
#define TEST_Z(x) do { if (!(x)) die("error: " #x " failed (returned zero/null)."); } while (0)
const int BUFFER_SIZE = 1024;
const int TIMEOUT_IN_MS = 500; /* ms */
struct context {
struct ibv_context *ctx;
struct ibv_pd *pd;
struct ibv_cq *cq;
struct ibv_comp_channel *comp_channel;
pthread_t cq_poller_thread;
};
struct connection {
struct rdma_cm_id *id;
struct ibv_qp *qp;
struct ibv_mr *recv_mr;
struct ibv_mr *send_mr;
char *recv_region;
char *send_region;
int num_completions;
};
static void die(const char *reason);
static void build_context(struct ibv_context *verbs);
static void build_qp_attr(struct ibv_qp_init_attr *qp_attr);
static void * poll_cq(void *);
static void post_receives(struct connection *conn);
static void register_memory(struct connection *conn);
static int on_addr_resolved(struct rdma_cm_id *id);
static void on_completion(struct ibv_wc *wc);
static int on_connection(void *context);
static int on_disconnect(struct rdma_cm_id *id);
static int on_event(struct rdma_cm_event *event);
static int on_route_resolved(struct rdma_cm_id *id);
static struct context *s_ctx = NULL;
int main(int argc, char **argv)
{
struct addrinfo *addr;
struct rdma_cm_event *event = NULL;
struct rdma_cm_id *conn = NULL;
struct rdma_event_channel *ec = NULL;
if (argc != 3)
die("usage: client <server-address> <server-port>");
TEST_NZ(getaddrinfo(argv[1], argv[2], NULL, &addr));
TEST_Z(ec = rdma_create_event_channel());
TEST_NZ(rdma_create_id(ec, &conn, NULL, RDMA_PS_TCP));
TEST_NZ(rdma_resolve_addr(conn, NULL, addr->ai_addr, TIMEOUT_IN_MS));
freeaddrinfo(addr);
while (rdma_get_cm_event(ec, &event) == 0) {
struct rdma_cm_event event_copy;
memcpy(&event_copy, event, sizeof(*event));
rdma_ack_cm_event(event);
if (on_event(&event_copy))
break;
}
rdma_destroy_event_channel(ec);
return 0;
}
void die(const char *reason)
{
fprintf(stderr, "%s\n", reason);
exit(EXIT_FAILURE);
}
void build_context(struct ibv_context *verbs)
{
if (s_ctx) {
if (s_ctx->ctx != verbs)
die("cannot handle events in more than one context.");
return;
}
s_ctx = (struct context *)malloc(sizeof(struct context));
s_ctx->ctx = verbs;
TEST_Z(s_ctx->pd = ibv_alloc_pd(s_ctx->ctx));
TEST_Z(s_ctx->comp_channel = ibv_create_comp_channel(s_ctx->ctx));
TEST_Z(s_ctx->cq = ibv_create_cq(s_ctx->ctx, 10, NULL, s_ctx->comp_channel, 0)); /* cqe=10 is arbitrary */
TEST_NZ(ibv_req_notify_cq(s_ctx->cq, 0));
TEST_NZ(pthread_create(&s_ctx->cq_poller_thread, NULL, poll_cq, NULL));
}
void build_qp_attr(struct ibv_qp_init_attr *qp_attr)
{
memset(qp_attr, 0, sizeof(*qp_attr));
qp_attr->send_cq = s_ctx->cq;
qp_attr->recv_cq = s_ctx->cq;
qp_attr->qp_type = IBV_QPT_RC;
qp_attr->cap.max_send_wr = 10;
qp_attr->cap.max_recv_wr = 10;
qp_attr->cap.max_send_sge = 1;
qp_attr->cap.max_recv_sge = 1;
}
void * poll_cq(void *ctx)
{
struct ibv_cq *cq;
struct ibv_wc wc;
while (1) {
TEST_NZ(ibv_get_cq_event(s_ctx->comp_channel, &cq, &ctx));
ibv_ack_cq_events(cq, 1);
TEST_NZ(ibv_req_notify_cq(cq, 0));
while (ibv_poll_cq(cq, 1, &wc))
on_completion(&wc);
}
return NULL;
}
void post_receives(struct connection *conn)
{
struct ibv_recv_wr wr, *bad_wr = NULL;
struct ibv_sge sge;
wr.wr_id = (uintptr_t)conn;
wr.next = NULL;
wr.sg_list = &sge;
wr.num_sge = 1;
sge.addr = (uintptr_t)conn->recv_region;
sge.length = BUFFER_SIZE;
sge.lkey = conn->recv_mr->lkey;
TEST_NZ(ibv_post_recv(conn->qp, &wr, &bad_wr));
}
void register_memory(struct connection *conn)
{
conn->send_region = malloc(BUFFER_SIZE);
conn->recv_region = malloc(BUFFER_SIZE);
TEST_Z(conn->send_mr = ibv_reg_mr(
s_ctx->pd,
conn->send_region,
BUFFER_SIZE,
0));
TEST_Z(conn->recv_mr = ibv_reg_mr(
s_ctx->pd,
conn->recv_region,
BUFFER_SIZE,
IBV_ACCESS_LOCAL_WRITE));
}
int on_addr_resolved(struct rdma_cm_id *id)
{
struct ibv_qp_init_attr qp_attr;
struct connection *conn;
printf("address resolved.\n");
build_context(id->verbs);
build_qp_attr(&qp_attr);
TEST_NZ(rdma_create_qp(id, s_ctx->pd, &qp_attr));
id->context = conn = (struct connection *)malloc(sizeof(struct connection));
conn->id = id;
conn->qp = id->qp;
conn->num_completions = 0;
register_memory(conn);
post_receives(conn);
TEST_NZ(rdma_resolve_route(id, TIMEOUT_IN_MS));
return 0;
}
void on_completion(struct ibv_wc *wc)
{
struct connection *conn = (struct connection *)(uintptr_t)wc->wr_id;
if (wc->status != IBV_WC_SUCCESS)
die("on_completion: status is not IBV_WC_SUCCESS.");
if (wc->opcode & IBV_WC_RECV)
printf("received message: %s\n", conn->recv_region);
else if (wc->opcode == IBV_WC_SEND)
printf("send completed successfully.\n");
else
die("on_completion: completion isn't a send or a receive.");
if (++conn->num_completions == 2)
rdma_disconnect(conn->id);
}
int on_connection(void *context)
{
struct connection *conn = (struct connection *)context;
struct ibv_send_wr wr, *bad_wr = NULL;
struct ibv_sge sge;
snprintf(conn->send_region, BUFFER_SIZE, "message from active/client side with pid %d", getpid());
printf("connected. posting send...\n");
memset(&wr, 0, sizeof(wr));
wr.wr_id = (uintptr_t)conn;
wr.opcode = IBV_WR_SEND;
wr.sg_list = &sge;
wr.num_sge = 1;
wr.send_flags = IBV_SEND_SIGNALED;
sge.addr = (uintptr_t)conn->send_region;
sge.length = BUFFER_SIZE;
sge.lkey = conn->send_mr->lkey;
TEST_NZ(ibv_post_send(conn->qp, &wr, &bad_wr));
return 0;
}
int on_disconnect(struct rdma_cm_id *id)
{
struct connection *conn = (struct connection *)id->context;
printf("disconnected.\n");
rdma_destroy_qp(id);
ibv_dereg_mr(conn->send_mr);
ibv_dereg_mr(conn->recv_mr);
free(conn->send_region);
free(conn->recv_region);
free(conn);
rdma_destroy_id(id);
return 1; /* exit event loop */
}
int on_event(struct rdma_cm_event *event)
{
int r = 0;
if (event->event == RDMA_CM_EVENT_ADDR_RESOLVED)
r = on_addr_resolved(event->id);
else if (event->event == RDMA_CM_EVENT_ROUTE_RESOLVED)
r = on_route_resolved(event->id);
else if (event->event == RDMA_CM_EVENT_ESTABLISHED)
r = on_connection(event->id->context);
else if (event->event == RDMA_CM_EVENT_DISCONNECTED)
r = on_disconnect(event->id);
else
die("on_event: unknown event.");
return r;
}
int on_route_resolved(struct rdma_cm_id *id)
{
struct rdma_conn_param cm_params;
printf("route resolved.\n");
memset(&cm_params, 0, sizeof(cm_params));
TEST_NZ(rdma_connect(id, &cm_params));
return 0;
}
More tutorials
https://thegeekinthecorner.wordpress.com/category/infiniband-verbs-rdma/
https://thegeekinthecorner.wordpress.com/2010/09/28/rdma-read-and-write-with-ib-verbs/
http://www.hpcadvisorycouncil.com/pdf/building-an-rdma-capable-application-with-ib-verbs.pdf
Linux programming examples
https://community.mellanox.com/s/topic/0TO50000000g1zhGAA/linux-programming?tabset-dea0d=2
4. Must rdma_listen() and rdma_get_request() be used together?
Not necessarily. rdma_listen() is not a blocking call; it only listens for connection requests and does not itself produce a new connection rdma_cm_id. To obtain one, either call rdma_get_cm_event() and wait for an RDMA_CM_EVENT_CONNECT_REQUEST event, or call rdma_get_request() to retrieve the connection request. The two mechanisms work the same way underneath: both deliver the RDMA_CM_EVENT_CONNECT_REQUEST event signaling that a client has connected, at which point a new rdma_cm_id (the connection id) is produced. With traditional TCP/IP, the listen stage only listens and a new socket fd appears only after accept; with RDMA, the new rdma_cm_id can appear during the listen stage. More precisely, after listen is called, a new rdma_cm_id is generated whenever an RDMA_CM_EVENT_CONNECT_REQUEST event arrives. In my tests, calling only listen without retrieving the request produced an error, so listen must be paired with a corresponding retrieval mechanism.
5. Why set IBV_SEND_INLINE?
int send_flags describes the attributes of the WR; its value is 0 or the bitwise OR of one or more of the following flags:
- IBV_SEND_FENCE - set the fence indicator for this WR. Processing of this WR is blocked until all previously posted RDMA Read and Atomic WRs have completed. Valid only for QPs with transport service type IBV_QPT_RC.
- IBV_SEND_SIGNALED - set the completion-notification indicator for this WR. If the QP was created with sq_sig_all = 0, a work completion is generated when processing of this WR finishes. If the QP was created with sq_sig_all = 1, this flag has no effect.
- IBV_SEND_SOLICITED - set the solicited-event indicator for this WR. When the message in this WR arrives at the remote QP, a solicited event is created, and if a user on the remote side is waiting for a solicited event, it is woken up. Relevant only for send and RDMA-write-with-immediate opcodes.
- IBV_SEND_INLINE - the memory buffers specified in sg_list are placed inline in the send request. This means the low-level driver (i.e., the CPU) reads the data rather than the RDMA device. The L_Key is not checked; in fact, the buffers do not even need to be registered, and they may be reused as soon as ibv_post_send() returns. Valid only for send and RDMA-write opcodes. Because this code exchanges no keys, one-sided RDMA transfers cannot be used, so the CPU reads the data anyway; and since the CPU is doing the reading, the memory buffer does not need to be registered. This flag applies only to send and write operations.
6. What is the relationship between opcodes and the QP transport service types that support them?